00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2379 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3640 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.103 The recommended git tool is: git 00:00:00.103 using credential 00000000-0000-0000-0000-000000000002 00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.124 Fetching changes from the remote Git repository 00:00:00.126 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.145 Using shallow fetch with depth 1 00:00:00.145 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.145 > git --version # timeout=10 00:00:00.170 > git --version # 'git version 2.39.2' 00:00:00.170 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.201 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.201 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.398 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.411 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.423 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:04.423 > git config core.sparsecheckout # timeout=10 00:00:04.434 > git read-tree -mu HEAD # timeout=10 00:00:04.450 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:04.468 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:04.468 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:04.563 [Pipeline] Start of Pipeline 00:00:04.577 [Pipeline] library 00:00:04.578 Loading library shm_lib@master 00:00:04.578 Library shm_lib@master is cached. Copying from home. 00:00:04.593 [Pipeline] node 00:00:04.607 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.609 [Pipeline] { 00:00:04.619 [Pipeline] catchError 00:00:04.621 [Pipeline] { 00:00:04.635 [Pipeline] wrap 00:00:04.645 [Pipeline] { 00:00:04.653 [Pipeline] stage 00:00:04.656 [Pipeline] { (Prologue) 00:00:04.674 [Pipeline] echo 00:00:04.675 Node: VM-host-SM0 00:00:04.681 [Pipeline] cleanWs 00:00:04.692 [WS-CLEANUP] Deleting project workspace... 00:00:04.692 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.698 [WS-CLEANUP] done 00:00:04.932 [Pipeline] setCustomBuildProperty 00:00:04.997 [Pipeline] httpRequest 00:00:05.324 [Pipeline] echo 00:00:05.325 Sorcerer 10.211.164.20 is alive 00:00:05.335 [Pipeline] retry 00:00:05.338 [Pipeline] { 00:00:05.352 [Pipeline] httpRequest 00:00:05.356 HttpMethod: GET 00:00:05.356 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.357 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.358 Response Code: HTTP/1.1 200 OK 00:00:05.358 Success: Status code 200 is in the accepted range: 200,404 00:00:05.359 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.930 [Pipeline] } 00:00:05.947 [Pipeline] // retry 00:00:05.953 [Pipeline] sh 00:00:06.231 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.247 [Pipeline] httpRequest 00:00:06.818 [Pipeline] echo 00:00:06.820 Sorcerer 10.211.164.20 is alive 00:00:06.827 [Pipeline] retry 00:00:06.829 [Pipeline] { 00:00:06.843 [Pipeline] httpRequest 00:00:06.847 HttpMethod: GET 00:00:06.848 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:06.848 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:06.855 Response Code: HTTP/1.1 200 OK 00:00:06.855 Success: Status code 200 is in the accepted range: 200,404 00:00:06.856 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:49.198 [Pipeline] } 00:00:49.220 [Pipeline] // retry 00:00:49.228 [Pipeline] sh 00:00:49.512 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:52.089 [Pipeline] sh 00:00:52.381 + git -C spdk log --oneline -n5 00:00:52.381 c13c99a5e test: Various fixes for Fedora40 00:00:52.381 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:52.381 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:52.381 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:52.381 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:52.400 [Pipeline] writeFile 00:00:52.415 [Pipeline] sh 00:00:52.697 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:52.713 [Pipeline] sh 00:00:53.002 + cat autorun-spdk.conf 00:00:53.002 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.002 SPDK_TEST_NVMF=1 00:00:53.002 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:53.002 SPDK_TEST_VFIOUSER=1 00:00:53.002 SPDK_TEST_USDT=1 00:00:53.002 SPDK_RUN_UBSAN=1 00:00:53.002 SPDK_TEST_NVMF_MDNS=1 00:00:53.002 NET_TYPE=virt 00:00:53.002 SPDK_JSONRPC_GO_CLIENT=1 00:00:53.002 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:53.009 RUN_NIGHTLY=1 00:00:53.011 [Pipeline] } 00:00:53.024 [Pipeline] // stage 00:00:53.039 [Pipeline] stage 00:00:53.041 [Pipeline] { (Run VM) 00:00:53.054 [Pipeline] sh 00:00:53.336 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:53.336 + echo 'Start stage prepare_nvme.sh' 00:00:53.336 Start stage prepare_nvme.sh 00:00:53.336 + [[ -n 4 ]] 00:00:53.336 + disk_prefix=ex4 00:00:53.336 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:00:53.336 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:00:53.336 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:00:53.336 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.336 ++ SPDK_TEST_NVMF=1 00:00:53.336 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:53.336 ++ SPDK_TEST_VFIOUSER=1 00:00:53.336 ++ SPDK_TEST_USDT=1 00:00:53.336 ++ SPDK_RUN_UBSAN=1 00:00:53.336 ++ SPDK_TEST_NVMF_MDNS=1 00:00:53.336 ++ NET_TYPE=virt 00:00:53.336 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:53.336 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:53.336 ++ RUN_NIGHTLY=1 00:00:53.336 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:53.336 + nvme_files=() 00:00:53.336 + declare -A nvme_files 00:00:53.336 + backend_dir=/var/lib/libvirt/images/backends 00:00:53.336 + nvme_files['nvme.img']=5G 00:00:53.336 + nvme_files['nvme-cmb.img']=5G 00:00:53.336 + nvme_files['nvme-multi0.img']=4G 00:00:53.336 + nvme_files['nvme-multi1.img']=4G 00:00:53.336 + nvme_files['nvme-multi2.img']=4G 00:00:53.336 + nvme_files['nvme-openstack.img']=8G 00:00:53.336 + nvme_files['nvme-zns.img']=5G 00:00:53.336 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:53.336 + (( SPDK_TEST_FTL == 1 )) 00:00:53.336 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:53.336 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:53.336 + for nvme in "${!nvme_files[@]}" 00:00:53.336 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:53.336 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:53.336 + for nvme in "${!nvme_files[@]}" 00:00:53.336 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:53.336 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.336 + for nvme in "${!nvme_files[@]}" 00:00:53.336 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:53.336 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:53.336 + for nvme in "${!nvme_files[@]}" 00:00:53.336 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:53.336 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.336 + for nvme in "${!nvme_files[@]}" 00:00:53.336 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:53.336 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:53.336 + for nvme in "${!nvme_files[@]}" 00:00:53.336 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:53.595 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:53.595 + for nvme in "${!nvme_files[@]}" 00:00:53.595 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:53.595 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.595 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:53.595 + echo 'End stage prepare_nvme.sh' 00:00:53.595 End stage prepare_nvme.sh 00:00:53.606 [Pipeline] sh 00:00:53.889 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:53.889 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b 
/var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:00:53.889 00:00:53.889 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:00:53.889 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:00:53.889 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:53.889 HELP=0 00:00:53.889 DRY_RUN=0 00:00:53.889 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:00:53.889 NVME_DISKS_TYPE=nvme,nvme, 00:00:53.889 NVME_AUTO_CREATE=0 00:00:53.889 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:00:53.889 NVME_CMB=,, 00:00:53.889 NVME_PMR=,, 00:00:53.889 NVME_ZNS=,, 00:00:53.889 NVME_MS=,, 00:00:53.889 NVME_FDP=,, 00:00:53.889 SPDK_VAGRANT_DISTRO=fedora39 00:00:53.889 SPDK_VAGRANT_VMCPU=10 00:00:53.889 SPDK_VAGRANT_VMRAM=12288 00:00:53.889 SPDK_VAGRANT_PROVIDER=libvirt 00:00:53.889 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:53.889 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:53.889 SPDK_OPENSTACK_NETWORK=0 00:00:53.889 VAGRANT_PACKAGE_BOX=0 00:00:53.889 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:53.889 FORCE_DISTRO=true 00:00:53.889 VAGRANT_BOX_VERSION= 00:00:53.889 EXTRA_VAGRANTFILES= 00:00:53.889 NIC_MODEL=e1000 00:00:53.889 00:00:53.889 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:00:53.889 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:57.177 Bringing machine 'default' up with 'libvirt' provider... 00:00:57.437 ==> default: Creating image (snapshot of base box volume). 00:00:57.437 ==> default: Creating domain with the following settings... 
00:00:57.437 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731880853_9c449e3a081a48f4b41a 00:00:57.437 ==> default: -- Domain type: kvm 00:00:57.437 ==> default: -- Cpus: 10 00:00:57.437 ==> default: -- Feature: acpi 00:00:57.437 ==> default: -- Feature: apic 00:00:57.437 ==> default: -- Feature: pae 00:00:57.437 ==> default: -- Memory: 12288M 00:00:57.437 ==> default: -- Memory Backing: hugepages: 00:00:57.437 ==> default: -- Management MAC: 00:00:57.437 ==> default: -- Loader: 00:00:57.437 ==> default: -- Nvram: 00:00:57.437 ==> default: -- Base box: spdk/fedora39 00:00:57.437 ==> default: -- Storage pool: default 00:00:57.437 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731880853_9c449e3a081a48f4b41a.img (20G) 00:00:57.437 ==> default: -- Volume Cache: default 00:00:57.437 ==> default: -- Kernel: 00:00:57.437 ==> default: -- Initrd: 00:00:57.437 ==> default: -- Graphics Type: vnc 00:00:57.437 ==> default: -- Graphics Port: -1 00:00:57.437 ==> default: -- Graphics IP: 127.0.0.1 00:00:57.437 ==> default: -- Graphics Password: Not defined 00:00:57.437 ==> default: -- Video Type: cirrus 00:00:57.437 ==> default: -- Video VRAM: 9216 00:00:57.437 ==> default: -- Sound Type: 00:00:57.437 ==> default: -- Keymap: en-us 00:00:57.437 ==> default: -- TPM Path: 00:00:57.437 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:57.437 ==> default: -- Command line args: 00:00:57.437 ==> default: -> value=-device, 00:00:57.437 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:57.437 ==> default: -> value=-drive, 00:00:57.437 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:00:57.437 ==> default: -> value=-device, 00:00:57.437 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.437 ==> default: -> value=-device, 00:00:57.437 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:00:57.437 ==> default: -> value=-drive, 00:00:57.437 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:57.437 ==> default: -> value=-device, 00:00:57.437 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.437 ==> default: -> value=-drive, 00:00:57.437 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:57.437 ==> default: -> value=-device, 00:00:57.437 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.437 ==> default: -> value=-drive, 00:00:57.437 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:57.437 ==> default: -> value=-device, 00:00:57.437 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.696 ==> default: Creating shared folders metadata... 00:00:57.696 ==> default: Starting domain. 00:00:59.600 ==> default: Waiting for domain to get an IP address... 00:01:17.699 ==> default: Waiting for SSH to become available... 00:01:17.699 ==> default: Configuring and enabling network interfaces... 
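For reference, the "Formatting ... fmt=raw ... preallocation=falloc" lines in the prepare_nvme stage earlier in the log are qemu-img output, and the "-device nvme ... -device nvme-ns ..." values listed above are the raw QEMU arguments libvirt passes through to the guest. A minimal standalone sketch of the same disk wiring, outside Vagrant/libvirt, could look like the following (image path, serial number and block sizes are copied from the log; the qemu-system-x86_64 invocation, machine type and memory size are assumptions, and a real guest would still need something to boot from):

  # Create one raw, fallocate-preallocated backing image, as the
  # create_nvme_img.sh step above appears to do (sketch, not the CI script itself).
  qemu-img create -f raw -o preallocation=falloc \
      /var/lib/libvirt/images/backends/ex4-nvme.img 5G

  # Expose it as an emulated NVMe controller with one 4096-byte-block namespace,
  # mirroring the first "-device nvme" / "-device nvme-ns" pair listed above.
  qemu-system-x86_64 \
      -machine q35,accel=kvm -m 1024 -nographic \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme,id=nvme-0,serial=12340 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096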
00:01:20.255 default: SSH address: 192.168.121.43:22 00:01:20.255 default: SSH username: vagrant 00:01:20.255 default: SSH auth method: private key 00:01:22.791 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:30.909 ==> default: Mounting SSHFS shared folder... 00:01:31.844 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:31.845 ==> default: Checking Mount.. 00:01:33.219 ==> default: Folder Successfully Mounted! 00:01:33.219 ==> default: Running provisioner: file... 00:01:33.786 default: ~/.gitconfig => .gitconfig 00:01:34.045 00:01:34.045 SUCCESS! 00:01:34.045 00:01:34.045 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:34.045 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:34.045 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:34.045 00:01:34.055 [Pipeline] } 00:01:34.070 [Pipeline] // stage 00:01:34.080 [Pipeline] dir 00:01:34.081 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:01:34.082 [Pipeline] { 00:01:34.095 [Pipeline] catchError 00:01:34.096 [Pipeline] { 00:01:34.108 [Pipeline] sh 00:01:34.387 + vagrant ssh-config --host vagrant 00:01:34.387 + sed -ne /^Host/,$p 00:01:34.387 + tee ssh_conf 00:01:36.921 Host vagrant 00:01:36.921 HostName 192.168.121.43 00:01:36.921 User vagrant 00:01:36.921 Port 22 00:01:36.921 UserKnownHostsFile /dev/null 00:01:36.921 StrictHostKeyChecking no 00:01:36.921 PasswordAuthentication no 00:01:36.921 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:36.921 IdentitiesOnly yes 00:01:36.921 LogLevel FATAL 00:01:36.921 ForwardAgent yes 00:01:36.921 ForwardX11 yes 00:01:36.921 00:01:36.935 [Pipeline] withEnv 00:01:36.937 [Pipeline] { 00:01:36.951 [Pipeline] sh 00:01:37.231 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:37.231 source /etc/os-release 00:01:37.231 [[ -e /image.version ]] && img=$(< /image.version) 00:01:37.231 # Minimal, systemd-like check. 00:01:37.231 if [[ -e /.dockerenv ]]; then 00:01:37.231 # Clear garbage from the node's name: 00:01:37.231 # agt-er_autotest_547-896 -> autotest_547-896 00:01:37.231 # $HOSTNAME is the actual container id 00:01:37.231 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:37.231 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:37.231 # We can assume this is a mount from a host where container is running, 00:01:37.231 # so fetch its hostname to easily identify the target swarm worker. 
00:01:37.231 container="$(< /etc/hostname) ($agent)" 00:01:37.231 else 00:01:37.231 # Fallback 00:01:37.231 container=$agent 00:01:37.231 fi 00:01:37.231 fi 00:01:37.231 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:37.231 00:01:37.500 [Pipeline] } 00:01:37.516 [Pipeline] // withEnv 00:01:37.527 [Pipeline] setCustomBuildProperty 00:01:37.543 [Pipeline] stage 00:01:37.545 [Pipeline] { (Tests) 00:01:37.564 [Pipeline] sh 00:01:37.844 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:38.117 [Pipeline] sh 00:01:38.408 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:38.483 [Pipeline] timeout 00:01:38.484 Timeout set to expire in 1 hr 0 min 00:01:38.485 [Pipeline] { 00:01:38.500 [Pipeline] sh 00:01:38.780 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:39.348 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:39.361 [Pipeline] sh 00:01:39.640 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:39.913 [Pipeline] sh 00:01:40.194 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:40.469 [Pipeline] sh 00:01:40.749 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:41.008 ++ readlink -f spdk_repo 00:01:41.008 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:41.008 + [[ -n /home/vagrant/spdk_repo ]] 00:01:41.008 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:41.008 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:41.008 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:41.008 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:41.008 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:41.008 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:41.008 + cd /home/vagrant/spdk_repo 00:01:41.008 + source /etc/os-release 00:01:41.008 ++ NAME='Fedora Linux' 00:01:41.008 ++ VERSION='39 (Cloud Edition)' 00:01:41.008 ++ ID=fedora 00:01:41.008 ++ VERSION_ID=39 00:01:41.008 ++ VERSION_CODENAME= 00:01:41.008 ++ PLATFORM_ID=platform:f39 00:01:41.008 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:41.008 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:41.008 ++ LOGO=fedora-logo-icon 00:01:41.008 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:41.008 ++ HOME_URL=https://fedoraproject.org/ 00:01:41.008 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:41.008 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:41.008 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:41.008 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:41.008 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:41.008 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:41.008 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:41.008 ++ SUPPORT_END=2024-11-12 00:01:41.008 ++ VARIANT='Cloud Edition' 00:01:41.008 ++ VARIANT_ID=cloud 00:01:41.008 + uname -a 00:01:41.008 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:41.008 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:41.008 Hugepages 00:01:41.008 node hugesize free / total 00:01:41.008 node0 1048576kB 0 / 0 00:01:41.008 node0 2048kB 0 / 0 00:01:41.008 00:01:41.008 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:41.008 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:41.008 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:41.008 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:41.008 + rm -f /tmp/spdk-ld-path 00:01:41.008 + source autorun-spdk.conf 00:01:41.008 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.008 ++ SPDK_TEST_NVMF=1 00:01:41.008 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.008 ++ SPDK_TEST_VFIOUSER=1 00:01:41.008 ++ SPDK_TEST_USDT=1 00:01:41.008 ++ SPDK_RUN_UBSAN=1 00:01:41.008 ++ SPDK_TEST_NVMF_MDNS=1 00:01:41.008 ++ NET_TYPE=virt 00:01:41.008 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:41.008 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.008 ++ RUN_NIGHTLY=1 00:01:41.008 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:41.008 + [[ -n '' ]] 00:01:41.008 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:41.267 + for M in /var/spdk/build-*-manifest.txt 00:01:41.267 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:41.267 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:41.267 + for M in /var/spdk/build-*-manifest.txt 00:01:41.267 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:41.267 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:41.267 + for M in /var/spdk/build-*-manifest.txt 00:01:41.267 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:41.267 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:41.267 ++ uname 00:01:41.267 + [[ Linux == \L\i\n\u\x ]] 00:01:41.267 + sudo dmesg -T 00:01:41.267 + sudo dmesg --clear 00:01:41.267 + dmesg_pid=5221 00:01:41.267 + [[ Fedora Linux == FreeBSD ]] 00:01:41.267 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:41.267 + sudo dmesg -Tw 00:01:41.267 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:41.267 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:41.267 + [[ -x /usr/src/fio-static/fio ]] 00:01:41.267 + export FIO_BIN=/usr/src/fio-static/fio 00:01:41.267 + FIO_BIN=/usr/src/fio-static/fio 00:01:41.267 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:41.267 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:41.267 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:41.267 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:41.267 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:41.267 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:41.267 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:41.267 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:41.267 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:41.267 Test configuration: 00:01:41.267 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.267 SPDK_TEST_NVMF=1 00:01:41.267 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.267 SPDK_TEST_VFIOUSER=1 00:01:41.267 SPDK_TEST_USDT=1 00:01:41.267 SPDK_RUN_UBSAN=1 00:01:41.267 SPDK_TEST_NVMF_MDNS=1 00:01:41.267 NET_TYPE=virt 00:01:41.267 SPDK_JSONRPC_GO_CLIENT=1 00:01:41.267 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.267 RUN_NIGHTLY=1 22:01:37 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:41.267 22:01:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:41.267 22:01:37 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:41.267 22:01:37 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:41.267 22:01:37 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:41.267 22:01:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.267 22:01:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.267 22:01:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.268 22:01:37 -- paths/export.sh@5 -- $ export PATH 00:01:41.268 22:01:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.268 22:01:37 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:41.268 
22:01:37 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:41.268 22:01:37 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731880897.XXXXXX 00:01:41.268 22:01:37 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731880897.wOiez2 00:01:41.268 22:01:37 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:41.268 22:01:37 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:41.268 22:01:37 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:41.268 22:01:37 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:41.268 22:01:37 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:41.268 22:01:37 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:41.268 22:01:37 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:41.268 22:01:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.268 22:01:37 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:01:41.268 22:01:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:41.268 22:01:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:41.268 22:01:37 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:41.268 22:01:37 -- spdk/autobuild.sh@16 -- $ date -u 00:01:41.268 Sun Nov 17 10:01:37 PM UTC 2024 00:01:41.268 22:01:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:41.268 LTS-67-gc13c99a5e 00:01:41.268 22:01:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:41.268 22:01:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:41.268 22:01:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:41.268 22:01:37 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:41.268 22:01:37 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:41.268 22:01:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.268 ************************************ 00:01:41.268 START TEST ubsan 00:01:41.268 ************************************ 00:01:41.268 using ubsan 00:01:41.268 22:01:37 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:41.268 00:01:41.268 real 0m0.000s 00:01:41.268 user 0m0.000s 00:01:41.268 sys 0m0.000s 00:01:41.268 22:01:37 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:41.268 ************************************ 00:01:41.268 END TEST ubsan 00:01:41.268 22:01:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.268 ************************************ 00:01:41.527 22:01:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:41.527 22:01:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:41.527 22:01:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:41.527 22:01:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:41.527 22:01:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:41.527 22:01:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:41.527 22:01:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:41.527 22:01:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:41.527 22:01:37 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:01:41.785 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:41.785 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:42.044 Using 'verbs' RDMA provider 00:01:57.489 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:09.700 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:09.700 go version go1.21.1 linux/amd64 00:02:09.958 Creating mk/config.mk...done. 00:02:09.958 Creating mk/cc.flags.mk...done. 00:02:09.958 Type 'make' to build. 00:02:09.958 22:02:06 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:09.958 22:02:06 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:09.958 22:02:06 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:09.958 22:02:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.958 ************************************ 00:02:09.958 START TEST make 00:02:09.958 ************************************ 00:02:09.958 22:02:06 -- common/autotest_common.sh@1114 -- $ make -j10 00:02:10.217 make[1]: Nothing to be done for 'all'. 00:02:11.593 The Meson build system 00:02:11.593 Version: 1.5.0 00:02:11.593 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:11.593 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:11.593 Build type: native build 00:02:11.593 Project name: libvfio-user 00:02:11.593 Project version: 0.0.1 00:02:11.593 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:11.593 C linker for the host machine: cc ld.bfd 2.40-14 00:02:11.593 Host machine cpu family: x86_64 00:02:11.593 Host machine cpu: x86_64 00:02:11.593 Run-time dependency threads found: YES 00:02:11.593 Library dl found: YES 00:02:11.593 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:11.593 Run-time dependency json-c found: YES 0.17 00:02:11.593 Run-time dependency cmocka found: YES 1.1.7 00:02:11.593 Program pytest-3 found: NO 00:02:11.593 Program flake8 found: NO 00:02:11.593 Program misspell-fixer found: NO 00:02:11.593 Program restructuredtext-lint found: NO 00:02:11.593 Program valgrind found: YES (/usr/bin/valgrind) 00:02:11.593 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:11.593 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:11.593 Compiler for C supports arguments -Wwrite-strings: YES 00:02:11.593 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:11.593 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:11.593 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:11.593 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:11.593 Build targets in project: 8 00:02:11.593 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:11.593 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:11.593 00:02:11.593 libvfio-user 0.0.1 00:02:11.593 00:02:11.593 User defined options 00:02:11.593 buildtype : debug 00:02:11.593 default_library: shared 00:02:11.593 libdir : /usr/local/lib 00:02:11.593 00:02:11.593 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.161 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:12.419 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:12.419 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:12.419 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:12.419 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:12.419 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:12.419 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:12.419 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:12.419 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:12.419 [9/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:12.419 [10/37] Compiling C object samples/null.p/null.c.o 00:02:12.420 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:12.420 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:12.420 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:12.420 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:12.683 [15/37] Compiling C object samples/server.p/server.c.o 00:02:12.683 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:12.683 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:12.683 [18/37] Compiling C object samples/client.p/client.c.o 00:02:12.683 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:12.683 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:12.683 [21/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:12.683 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:12.683 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:12.683 [24/37] Linking target samples/client 00:02:12.683 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:12.683 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:12.683 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:02:12.683 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:12.683 [29/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:12.945 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:12.945 [31/37] Linking target test/unit_tests 00:02:12.945 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:12.945 [33/37] Linking target samples/lspci 00:02:12.945 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:12.945 [35/37] Linking target samples/gpio-pci-idio-16 00:02:12.945 [36/37] Linking target samples/null 00:02:12.945 [37/37] Linking target samples/server 00:02:12.945 INFO: autodetecting backend as ninja 00:02:12.945 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:12.945 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:13.547 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:13.547 ninja: no work to do. 00:02:21.674 The Meson build system 00:02:21.674 Version: 1.5.0 00:02:21.674 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:21.674 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:21.674 Build type: native build 00:02:21.674 Program cat found: YES (/usr/bin/cat) 00:02:21.674 Project name: DPDK 00:02:21.674 Project version: 23.11.0 00:02:21.674 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:21.674 C linker for the host machine: cc ld.bfd 2.40-14 00:02:21.674 Host machine cpu family: x86_64 00:02:21.674 Host machine cpu: x86_64 00:02:21.674 Message: ## Building in Developer Mode ## 00:02:21.674 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:21.674 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:21.674 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:21.674 Program python3 found: YES (/usr/bin/python3) 00:02:21.674 Program cat found: YES (/usr/bin/cat) 00:02:21.674 Compiler for C supports arguments -march=native: YES 00:02:21.674 Checking for size of "void *" : 8 00:02:21.674 Checking for size of "void *" : 8 (cached) 00:02:21.674 Library m found: YES 00:02:21.674 Library numa found: YES 00:02:21.675 Has header "numaif.h" : YES 00:02:21.675 Library fdt found: NO 00:02:21.675 Library execinfo found: NO 00:02:21.675 Has header "execinfo.h" : YES 00:02:21.675 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:21.675 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:21.675 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:21.675 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:21.675 Run-time dependency openssl found: YES 3.1.1 00:02:21.675 Run-time dependency libpcap found: YES 1.10.4 00:02:21.675 Has header "pcap.h" with dependency libpcap: YES 00:02:21.675 Compiler for C supports arguments -Wcast-qual: YES 00:02:21.675 Compiler for C supports arguments -Wdeprecated: YES 00:02:21.675 Compiler for C supports arguments -Wformat: YES 00:02:21.675 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:21.675 Compiler for C supports arguments -Wformat-security: NO 00:02:21.675 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:21.675 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:21.675 Compiler for C supports arguments -Wnested-externs: YES 00:02:21.675 Compiler for C supports arguments -Wold-style-definition: YES 00:02:21.675 Compiler for C supports arguments -Wpointer-arith: YES 00:02:21.675 Compiler for C supports arguments -Wsign-compare: YES 00:02:21.675 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:21.675 Compiler for C supports arguments -Wundef: YES 00:02:21.675 Compiler for C supports arguments -Wwrite-strings: YES 00:02:21.675 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:21.675 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:21.675 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:21.675 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:21.675 Program objdump found: YES (/usr/bin/objdump) 00:02:21.675 
Compiler for C supports arguments -mavx512f: YES 00:02:21.675 Checking if "AVX512 checking" compiles: YES 00:02:21.675 Fetching value of define "__SSE4_2__" : 1 00:02:21.675 Fetching value of define "__AES__" : 1 00:02:21.675 Fetching value of define "__AVX__" : 1 00:02:21.675 Fetching value of define "__AVX2__" : 1 00:02:21.675 Fetching value of define "__AVX512BW__" : (undefined) 00:02:21.675 Fetching value of define "__AVX512CD__" : (undefined) 00:02:21.675 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:21.675 Fetching value of define "__AVX512F__" : (undefined) 00:02:21.675 Fetching value of define "__AVX512VL__" : (undefined) 00:02:21.675 Fetching value of define "__PCLMUL__" : 1 00:02:21.675 Fetching value of define "__RDRND__" : 1 00:02:21.675 Fetching value of define "__RDSEED__" : 1 00:02:21.675 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:21.675 Fetching value of define "__znver1__" : (undefined) 00:02:21.675 Fetching value of define "__znver2__" : (undefined) 00:02:21.675 Fetching value of define "__znver3__" : (undefined) 00:02:21.675 Fetching value of define "__znver4__" : (undefined) 00:02:21.675 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:21.675 Message: lib/log: Defining dependency "log" 00:02:21.675 Message: lib/kvargs: Defining dependency "kvargs" 00:02:21.675 Message: lib/telemetry: Defining dependency "telemetry" 00:02:21.675 Checking for function "getentropy" : NO 00:02:21.675 Message: lib/eal: Defining dependency "eal" 00:02:21.675 Message: lib/ring: Defining dependency "ring" 00:02:21.675 Message: lib/rcu: Defining dependency "rcu" 00:02:21.675 Message: lib/mempool: Defining dependency "mempool" 00:02:21.675 Message: lib/mbuf: Defining dependency "mbuf" 00:02:21.675 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:21.675 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:21.675 Compiler for C supports arguments -mpclmul: YES 00:02:21.675 Compiler for C supports arguments -maes: YES 00:02:21.675 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.675 Compiler for C supports arguments -mavx512bw: YES 00:02:21.675 Compiler for C supports arguments -mavx512dq: YES 00:02:21.675 Compiler for C supports arguments -mavx512vl: YES 00:02:21.675 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:21.675 Compiler for C supports arguments -mavx2: YES 00:02:21.675 Compiler for C supports arguments -mavx: YES 00:02:21.675 Message: lib/net: Defining dependency "net" 00:02:21.675 Message: lib/meter: Defining dependency "meter" 00:02:21.675 Message: lib/ethdev: Defining dependency "ethdev" 00:02:21.675 Message: lib/pci: Defining dependency "pci" 00:02:21.675 Message: lib/cmdline: Defining dependency "cmdline" 00:02:21.675 Message: lib/hash: Defining dependency "hash" 00:02:21.675 Message: lib/timer: Defining dependency "timer" 00:02:21.675 Message: lib/compressdev: Defining dependency "compressdev" 00:02:21.675 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:21.675 Message: lib/dmadev: Defining dependency "dmadev" 00:02:21.675 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:21.675 Message: lib/power: Defining dependency "power" 00:02:21.675 Message: lib/reorder: Defining dependency "reorder" 00:02:21.675 Message: lib/security: Defining dependency "security" 00:02:21.675 Has header "linux/userfaultfd.h" : YES 00:02:21.675 Has header "linux/vduse.h" : YES 00:02:21.675 Message: lib/vhost: Defining dependency "vhost" 00:02:21.675 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:21.675 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:21.675 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:21.675 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:21.675 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:21.675 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:21.675 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:21.675 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:21.675 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:21.675 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:21.675 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:21.675 Configuring doxy-api-html.conf using configuration 00:02:21.675 Configuring doxy-api-man.conf using configuration 00:02:21.675 Program mandb found: YES (/usr/bin/mandb) 00:02:21.675 Program sphinx-build found: NO 00:02:21.675 Configuring rte_build_config.h using configuration 00:02:21.675 Message: 00:02:21.675 ================= 00:02:21.675 Applications Enabled 00:02:21.675 ================= 00:02:21.675 00:02:21.675 apps: 00:02:21.675 00:02:21.675 00:02:21.675 Message: 00:02:21.675 ================= 00:02:21.675 Libraries Enabled 00:02:21.675 ================= 00:02:21.675 00:02:21.675 libs: 00:02:21.675 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:21.675 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:21.675 cryptodev, dmadev, power, reorder, security, vhost, 00:02:21.675 00:02:21.675 Message: 00:02:21.675 =============== 00:02:21.675 Drivers Enabled 00:02:21.675 =============== 00:02:21.675 00:02:21.675 common: 00:02:21.675 00:02:21.675 bus: 00:02:21.675 pci, vdev, 00:02:21.675 mempool: 00:02:21.675 ring, 00:02:21.675 dma: 00:02:21.675 00:02:21.675 net: 00:02:21.675 00:02:21.675 crypto: 00:02:21.675 00:02:21.675 compress: 00:02:21.675 00:02:21.675 vdpa: 00:02:21.675 00:02:21.675 00:02:21.675 Message: 00:02:21.675 ================= 00:02:21.675 Content Skipped 00:02:21.675 ================= 00:02:21.675 00:02:21.675 apps: 00:02:21.675 dumpcap: explicitly disabled via build config 00:02:21.675 graph: explicitly disabled via build config 00:02:21.675 pdump: explicitly disabled via build config 00:02:21.675 proc-info: explicitly disabled via build config 00:02:21.675 test-acl: explicitly disabled via build config 00:02:21.675 test-bbdev: explicitly disabled via build config 00:02:21.675 test-cmdline: explicitly disabled via build config 00:02:21.675 test-compress-perf: explicitly disabled via build config 00:02:21.675 test-crypto-perf: explicitly disabled via build config 00:02:21.675 test-dma-perf: explicitly disabled via build config 00:02:21.675 test-eventdev: explicitly disabled via build config 00:02:21.675 test-fib: explicitly disabled via build config 00:02:21.675 test-flow-perf: explicitly disabled via build config 00:02:21.675 test-gpudev: explicitly disabled via build config 00:02:21.675 test-mldev: explicitly disabled via build config 00:02:21.675 test-pipeline: explicitly disabled via build config 00:02:21.675 test-pmd: explicitly disabled via build config 00:02:21.675 test-regex: explicitly disabled via build config 00:02:21.675 test-sad: explicitly disabled via build config 00:02:21.675 test-security-perf: explicitly disabled via build config 00:02:21.675 00:02:21.675 libs: 00:02:21.675 metrics: explicitly 
disabled via build config 00:02:21.675 acl: explicitly disabled via build config 00:02:21.675 bbdev: explicitly disabled via build config 00:02:21.675 bitratestats: explicitly disabled via build config 00:02:21.675 bpf: explicitly disabled via build config 00:02:21.675 cfgfile: explicitly disabled via build config 00:02:21.675 distributor: explicitly disabled via build config 00:02:21.675 efd: explicitly disabled via build config 00:02:21.675 eventdev: explicitly disabled via build config 00:02:21.675 dispatcher: explicitly disabled via build config 00:02:21.675 gpudev: explicitly disabled via build config 00:02:21.675 gro: explicitly disabled via build config 00:02:21.675 gso: explicitly disabled via build config 00:02:21.675 ip_frag: explicitly disabled via build config 00:02:21.675 jobstats: explicitly disabled via build config 00:02:21.675 latencystats: explicitly disabled via build config 00:02:21.675 lpm: explicitly disabled via build config 00:02:21.675 member: explicitly disabled via build config 00:02:21.675 pcapng: explicitly disabled via build config 00:02:21.675 rawdev: explicitly disabled via build config 00:02:21.676 regexdev: explicitly disabled via build config 00:02:21.676 mldev: explicitly disabled via build config 00:02:21.676 rib: explicitly disabled via build config 00:02:21.676 sched: explicitly disabled via build config 00:02:21.676 stack: explicitly disabled via build config 00:02:21.676 ipsec: explicitly disabled via build config 00:02:21.676 pdcp: explicitly disabled via build config 00:02:21.676 fib: explicitly disabled via build config 00:02:21.676 port: explicitly disabled via build config 00:02:21.676 pdump: explicitly disabled via build config 00:02:21.676 table: explicitly disabled via build config 00:02:21.676 pipeline: explicitly disabled via build config 00:02:21.676 graph: explicitly disabled via build config 00:02:21.676 node: explicitly disabled via build config 00:02:21.676 00:02:21.676 drivers: 00:02:21.676 common/cpt: not in enabled drivers build config 00:02:21.676 common/dpaax: not in enabled drivers build config 00:02:21.676 common/iavf: not in enabled drivers build config 00:02:21.676 common/idpf: not in enabled drivers build config 00:02:21.676 common/mvep: not in enabled drivers build config 00:02:21.676 common/octeontx: not in enabled drivers build config 00:02:21.676 bus/auxiliary: not in enabled drivers build config 00:02:21.676 bus/cdx: not in enabled drivers build config 00:02:21.676 bus/dpaa: not in enabled drivers build config 00:02:21.676 bus/fslmc: not in enabled drivers build config 00:02:21.676 bus/ifpga: not in enabled drivers build config 00:02:21.676 bus/platform: not in enabled drivers build config 00:02:21.676 bus/vmbus: not in enabled drivers build config 00:02:21.676 common/cnxk: not in enabled drivers build config 00:02:21.676 common/mlx5: not in enabled drivers build config 00:02:21.676 common/nfp: not in enabled drivers build config 00:02:21.676 common/qat: not in enabled drivers build config 00:02:21.676 common/sfc_efx: not in enabled drivers build config 00:02:21.676 mempool/bucket: not in enabled drivers build config 00:02:21.676 mempool/cnxk: not in enabled drivers build config 00:02:21.676 mempool/dpaa: not in enabled drivers build config 00:02:21.676 mempool/dpaa2: not in enabled drivers build config 00:02:21.676 mempool/octeontx: not in enabled drivers build config 00:02:21.676 mempool/stack: not in enabled drivers build config 00:02:21.676 dma/cnxk: not in enabled drivers build config 00:02:21.676 dma/dpaa: not in 
enabled drivers build config 00:02:21.676 dma/dpaa2: not in enabled drivers build config 00:02:21.676 dma/hisilicon: not in enabled drivers build config 00:02:21.676 dma/idxd: not in enabled drivers build config 00:02:21.676 dma/ioat: not in enabled drivers build config 00:02:21.676 dma/skeleton: not in enabled drivers build config 00:02:21.676 net/af_packet: not in enabled drivers build config 00:02:21.676 net/af_xdp: not in enabled drivers build config 00:02:21.676 net/ark: not in enabled drivers build config 00:02:21.676 net/atlantic: not in enabled drivers build config 00:02:21.676 net/avp: not in enabled drivers build config 00:02:21.676 net/axgbe: not in enabled drivers build config 00:02:21.676 net/bnx2x: not in enabled drivers build config 00:02:21.676 net/bnxt: not in enabled drivers build config 00:02:21.676 net/bonding: not in enabled drivers build config 00:02:21.676 net/cnxk: not in enabled drivers build config 00:02:21.676 net/cpfl: not in enabled drivers build config 00:02:21.676 net/cxgbe: not in enabled drivers build config 00:02:21.676 net/dpaa: not in enabled drivers build config 00:02:21.676 net/dpaa2: not in enabled drivers build config 00:02:21.676 net/e1000: not in enabled drivers build config 00:02:21.676 net/ena: not in enabled drivers build config 00:02:21.676 net/enetc: not in enabled drivers build config 00:02:21.676 net/enetfec: not in enabled drivers build config 00:02:21.676 net/enic: not in enabled drivers build config 00:02:21.676 net/failsafe: not in enabled drivers build config 00:02:21.676 net/fm10k: not in enabled drivers build config 00:02:21.676 net/gve: not in enabled drivers build config 00:02:21.676 net/hinic: not in enabled drivers build config 00:02:21.676 net/hns3: not in enabled drivers build config 00:02:21.676 net/i40e: not in enabled drivers build config 00:02:21.676 net/iavf: not in enabled drivers build config 00:02:21.676 net/ice: not in enabled drivers build config 00:02:21.676 net/idpf: not in enabled drivers build config 00:02:21.676 net/igc: not in enabled drivers build config 00:02:21.676 net/ionic: not in enabled drivers build config 00:02:21.676 net/ipn3ke: not in enabled drivers build config 00:02:21.676 net/ixgbe: not in enabled drivers build config 00:02:21.676 net/mana: not in enabled drivers build config 00:02:21.676 net/memif: not in enabled drivers build config 00:02:21.676 net/mlx4: not in enabled drivers build config 00:02:21.676 net/mlx5: not in enabled drivers build config 00:02:21.676 net/mvneta: not in enabled drivers build config 00:02:21.676 net/mvpp2: not in enabled drivers build config 00:02:21.676 net/netvsc: not in enabled drivers build config 00:02:21.676 net/nfb: not in enabled drivers build config 00:02:21.676 net/nfp: not in enabled drivers build config 00:02:21.676 net/ngbe: not in enabled drivers build config 00:02:21.676 net/null: not in enabled drivers build config 00:02:21.676 net/octeontx: not in enabled drivers build config 00:02:21.676 net/octeon_ep: not in enabled drivers build config 00:02:21.676 net/pcap: not in enabled drivers build config 00:02:21.676 net/pfe: not in enabled drivers build config 00:02:21.676 net/qede: not in enabled drivers build config 00:02:21.676 net/ring: not in enabled drivers build config 00:02:21.676 net/sfc: not in enabled drivers build config 00:02:21.676 net/softnic: not in enabled drivers build config 00:02:21.676 net/tap: not in enabled drivers build config 00:02:21.676 net/thunderx: not in enabled drivers build config 00:02:21.676 net/txgbe: not in enabled drivers 
build config 00:02:21.676 net/vdev_netvsc: not in enabled drivers build config 00:02:21.676 net/vhost: not in enabled drivers build config 00:02:21.676 net/virtio: not in enabled drivers build config 00:02:21.676 net/vmxnet3: not in enabled drivers build config 00:02:21.676 raw/*: missing internal dependency, "rawdev" 00:02:21.676 crypto/armv8: not in enabled drivers build config 00:02:21.676 crypto/bcmfs: not in enabled drivers build config 00:02:21.676 crypto/caam_jr: not in enabled drivers build config 00:02:21.676 crypto/ccp: not in enabled drivers build config 00:02:21.676 crypto/cnxk: not in enabled drivers build config 00:02:21.676 crypto/dpaa_sec: not in enabled drivers build config 00:02:21.676 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.676 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.676 crypto/mlx5: not in enabled drivers build config 00:02:21.676 crypto/mvsam: not in enabled drivers build config 00:02:21.676 crypto/nitrox: not in enabled drivers build config 00:02:21.676 crypto/null: not in enabled drivers build config 00:02:21.676 crypto/octeontx: not in enabled drivers build config 00:02:21.676 crypto/openssl: not in enabled drivers build config 00:02:21.676 crypto/scheduler: not in enabled drivers build config 00:02:21.676 crypto/uadk: not in enabled drivers build config 00:02:21.676 crypto/virtio: not in enabled drivers build config 00:02:21.676 compress/isal: not in enabled drivers build config 00:02:21.676 compress/mlx5: not in enabled drivers build config 00:02:21.676 compress/octeontx: not in enabled drivers build config 00:02:21.676 compress/zlib: not in enabled drivers build config 00:02:21.676 regex/*: missing internal dependency, "regexdev" 00:02:21.676 ml/*: missing internal dependency, "mldev" 00:02:21.676 vdpa/ifc: not in enabled drivers build config 00:02:21.676 vdpa/mlx5: not in enabled drivers build config 00:02:21.676 vdpa/nfp: not in enabled drivers build config 00:02:21.676 vdpa/sfc: not in enabled drivers build config 00:02:21.676 event/*: missing internal dependency, "eventdev" 00:02:21.676 baseband/*: missing internal dependency, "bbdev" 00:02:21.676 gpu/*: missing internal dependency, "gpudev" 00:02:21.676 00:02:21.676 00:02:21.676 Build targets in project: 85 00:02:21.676 00:02:21.676 DPDK 23.11.0 00:02:21.676 00:02:21.676 User defined options 00:02:21.676 buildtype : debug 00:02:21.676 default_library : shared 00:02:21.676 libdir : lib 00:02:21.676 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:21.676 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:21.676 c_link_args : 00:02:21.676 cpu_instruction_set: native 00:02:21.676 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:21.676 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:21.676 enable_docs : false 00:02:21.676 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:21.676 enable_kmods : false 00:02:21.676 tests : false 00:02:21.676 00:02:21.676 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.676 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:21.676 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:21.676 [2/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:21.676 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:21.676 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:21.676 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:21.676 [6/265] Linking static target lib/librte_log.a 00:02:21.676 [7/265] Linking static target lib/librte_kvargs.a 00:02:21.676 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:21.676 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:21.676 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:22.244 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.244 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:22.503 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:22.503 [14/265] Linking static target lib/librte_telemetry.a 00:02:22.503 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:22.503 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:22.503 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:22.503 [18/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.503 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:22.503 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:22.503 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:22.503 [22/265] Linking target lib/librte_log.so.24.0 00:02:22.763 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:22.763 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:23.022 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:23.023 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:23.023 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:23.281 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:23.281 [29/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.281 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:23.281 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:23.281 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:23.281 [33/265] Linking target lib/librte_telemetry.so.24.0 00:02:23.281 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:23.541 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:23.541 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:23.541 [37/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:23.541 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:23.800 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:23.800 [40/265] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:23.800 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:23.800 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:24.058 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:24.058 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:24.058 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:24.058 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:24.317 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:24.317 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:24.317 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:24.317 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:24.575 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:24.575 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:24.834 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:24.834 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:24.834 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:24.834 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:24.834 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:25.093 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:25.093 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:25.093 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:25.093 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:25.353 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:25.353 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:25.353 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:25.612 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:25.612 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:25.612 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:25.871 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:25.871 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:25.871 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:25.871 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:25.871 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:25.871 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:25.871 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:26.130 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:26.130 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:26.130 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:26.389 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:26.389 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:26.648 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:26.648 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:26.648 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:26.648 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:26.907 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:26.907 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:26.907 [86/265] Linking static target lib/librte_ring.a 00:02:26.907 [87/265] Linking static target lib/librte_eal.a 00:02:26.907 [88/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:26.907 [89/265] Linking static target lib/librte_rcu.a 00:02:27.166 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:27.166 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:27.426 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:27.426 [93/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.426 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:27.426 [95/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.426 [96/265] Linking static target lib/librte_mempool.a 00:02:27.685 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:27.685 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:27.944 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:27.944 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:28.203 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:28.203 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:28.203 [103/265] Linking static target lib/librte_mbuf.a 00:02:28.203 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:28.462 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:28.462 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:28.462 [107/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:28.462 [108/265] Linking static target lib/librte_meter.a 00:02:28.462 [109/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:28.721 [110/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:28.721 [111/265] Linking static target lib/librte_net.a 00:02:28.721 [112/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.980 [113/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.980 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:29.240 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:29.240 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:29.240 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.499 [118/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.499 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.066 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:30.372 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
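The [N/265] entries in this stretch are ninja building the bundled DPDK 23.11 subproject with the "User defined options" summarized above (debug buildtype, shared libraries, only the pci/vdev bus and ring mempool drivers enabled, most apps and libraries disabled). The sketch below is illustrative only: the real configuration is driven by SPDK's dpdk build makefile, the option list is abridged from the log (c_args shortened, disable_apps/disable_libs/prefix omitted), and the directory name is an assumption.

  # hedged sketch: a standalone DPDK configuration roughly matching the options logged above
  meson setup build-tmp \
    --buildtype=debug \
    --default-library=shared \
    -Dc_args='-fPIC -Werror' \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Dtests=false
  ninja -C build-tmp -j 10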
00:02:30.372 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:30.372 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:30.372 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:30.660 [125/265] Linking static target lib/librte_pci.a 00:02:30.660 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:30.660 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:30.660 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:30.920 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:30.920 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:30.920 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:30.920 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:30.920 [133/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.920 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:30.920 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:30.920 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:31.179 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:31.179 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:31.179 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:31.179 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:31.179 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:31.179 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:31.438 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:31.697 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:31.697 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:31.957 [146/265] Linking static target lib/librte_cmdline.a 00:02:31.957 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:31.957 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:31.957 [149/265] Linking static target lib/librte_timer.a 00:02:32.216 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:32.216 [151/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:32.216 [152/265] Linking static target lib/librte_ethdev.a 00:02:32.216 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:32.216 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:32.216 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:32.216 [156/265] Linking static target lib/librte_compressdev.a 00:02:32.475 [157/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:32.475 [158/265] Linking static target lib/librte_hash.a 00:02:32.740 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:32.740 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.740 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:32.999 [162/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:32.999 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:32.999 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:33.257 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:33.257 [166/265] Linking static target lib/librte_dmadev.a 00:02:33.257 [167/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.257 [168/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:33.257 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:33.516 [170/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:33.516 [171/265] Linking static target lib/librte_cryptodev.a 00:02:33.516 [172/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:33.516 [173/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.516 [174/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.516 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:33.774 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.774 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:34.033 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:34.033 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:34.033 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:34.292 [181/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:34.551 [182/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:34.551 [183/265] Linking static target lib/librte_reorder.a 00:02:34.551 [184/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:34.551 [185/265] Linking static target lib/librte_power.a 00:02:34.551 [186/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:34.551 [187/265] Linking static target lib/librte_security.a 00:02:34.551 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:34.551 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:34.810 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:35.070 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.070 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:35.328 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.587 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:35.846 [195/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.846 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:35.846 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:35.846 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:35.846 [199/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.105 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:36.105 [201/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:36.364 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:36.364 [203/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:36.623 [204/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:36.623 [205/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:36.623 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:36.623 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:36.623 [208/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:36.623 [209/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:36.883 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:36.883 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.883 [212/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:36.883 [213/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.883 [214/265] Linking static target drivers/librte_bus_vdev.a 00:02:36.883 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.883 [216/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.883 [217/265] Linking static target drivers/librte_bus_pci.a 00:02:36.883 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:36.883 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:37.142 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.142 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:37.142 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:37.142 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:37.142 [224/265] Linking static target drivers/librte_mempool_ring.a 00:02:37.142 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.711 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:37.711 [227/265] Linking static target lib/librte_vhost.a 00:02:38.648 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.648 [229/265] Linking target lib/librte_eal.so.24.0 00:02:38.648 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:38.908 [231/265] Linking target lib/librte_ring.so.24.0 00:02:38.908 [232/265] Linking target lib/librte_pci.so.24.0 00:02:38.908 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:38.908 [234/265] Linking target lib/librte_dmadev.so.24.0 00:02:38.908 [235/265] Linking target lib/librte_meter.so.24.0 00:02:38.908 [236/265] Linking target lib/librte_timer.so.24.0 00:02:38.908 [237/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:38.908 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:38.908 [239/265] Linking target lib/librte_mempool.so.24.0 00:02:38.908 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:38.908 
[241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:38.908 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:38.908 [243/265] Linking target lib/librte_rcu.so.24.0 00:02:38.908 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:39.166 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:39.166 [246/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:39.166 [247/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:39.166 [248/265] Linking target lib/librte_mbuf.so.24.0 00:02:39.166 [249/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.426 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:39.426 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:02:39.426 [252/265] Linking target lib/librte_compressdev.so.24.0 00:02:39.426 [253/265] Linking target lib/librte_reorder.so.24.0 00:02:39.426 [254/265] Linking target lib/librte_net.so.24.0 00:02:39.426 [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:39.426 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:39.685 [257/265] Linking target lib/librte_hash.so.24.0 00:02:39.685 [258/265] Linking target lib/librte_security.so.24.0 00:02:39.685 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:39.685 [260/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.685 [261/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:39.685 [262/265] Linking target lib/librte_ethdev.so.24.0 00:02:39.944 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:39.944 [264/265] Linking target lib/librte_power.so.24.0 00:02:39.944 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:39.944 INFO: autodetecting backend as ninja 00:02:39.944 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:41.321 CC lib/log/log_flags.o 00:02:41.321 CC lib/log/log.o 00:02:41.321 CC lib/log/log_deprecated.o 00:02:41.321 CC lib/ut_mock/mock.o 00:02:41.321 CC lib/ut/ut.o 00:02:41.321 LIB libspdk_ut_mock.a 00:02:41.321 SO libspdk_ut_mock.so.5.0 00:02:41.321 LIB libspdk_ut.a 00:02:41.321 LIB libspdk_log.a 00:02:41.321 SO libspdk_ut.so.1.0 00:02:41.321 SO libspdk_log.so.6.1 00:02:41.321 SYMLINK libspdk_ut_mock.so 00:02:41.321 SYMLINK libspdk_ut.so 00:02:41.580 SYMLINK libspdk_log.so 00:02:41.580 CC lib/util/bit_array.o 00:02:41.580 CC lib/util/base64.o 00:02:41.580 CC lib/util/cpuset.o 00:02:41.580 CC lib/util/crc32c.o 00:02:41.580 CC lib/util/crc32.o 00:02:41.580 CC lib/util/crc16.o 00:02:41.580 CC lib/dma/dma.o 00:02:41.580 CC lib/ioat/ioat.o 00:02:41.580 CXX lib/trace_parser/trace.o 00:02:41.580 CC lib/vfio_user/host/vfio_user_pci.o 00:02:41.838 CC lib/vfio_user/host/vfio_user.o 00:02:41.838 CC lib/util/crc32_ieee.o 00:02:41.838 CC lib/util/crc64.o 00:02:41.838 CC lib/util/dif.o 00:02:41.838 CC lib/util/fd.o 00:02:41.838 LIB libspdk_dma.a 00:02:41.838 CC lib/util/file.o 00:02:41.838 SO libspdk_dma.so.3.0 00:02:41.838 SYMLINK libspdk_dma.so 00:02:41.838 CC lib/util/hexlify.o 00:02:41.838 CC lib/util/iov.o 00:02:41.838 CC lib/util/math.o 00:02:41.838 LIB libspdk_ioat.a 00:02:42.097 CC lib/util/pipe.o 00:02:42.097 
SO libspdk_ioat.so.6.0 00:02:42.097 CC lib/util/strerror_tls.o 00:02:42.097 LIB libspdk_vfio_user.a 00:02:42.097 CC lib/util/string.o 00:02:42.097 SYMLINK libspdk_ioat.so 00:02:42.097 CC lib/util/uuid.o 00:02:42.097 SO libspdk_vfio_user.so.4.0 00:02:42.097 CC lib/util/fd_group.o 00:02:42.097 CC lib/util/xor.o 00:02:42.097 SYMLINK libspdk_vfio_user.so 00:02:42.097 CC lib/util/zipf.o 00:02:42.356 LIB libspdk_util.a 00:02:42.356 SO libspdk_util.so.8.0 00:02:42.614 SYMLINK libspdk_util.so 00:02:42.614 LIB libspdk_trace_parser.a 00:02:42.614 CC lib/json/json_util.o 00:02:42.614 CC lib/json/json_parse.o 00:02:42.614 CC lib/json/json_write.o 00:02:42.614 CC lib/vmd/vmd.o 00:02:42.614 CC lib/conf/conf.o 00:02:42.614 CC lib/rdma/common.o 00:02:42.614 CC lib/rdma/rdma_verbs.o 00:02:42.614 CC lib/env_dpdk/env.o 00:02:42.614 CC lib/idxd/idxd.o 00:02:42.614 SO libspdk_trace_parser.so.4.0 00:02:42.873 SYMLINK libspdk_trace_parser.so 00:02:42.873 CC lib/env_dpdk/memory.o 00:02:42.873 CC lib/env_dpdk/pci.o 00:02:42.873 LIB libspdk_conf.a 00:02:42.873 CC lib/env_dpdk/init.o 00:02:42.873 CC lib/env_dpdk/threads.o 00:02:42.873 SO libspdk_conf.so.5.0 00:02:42.873 LIB libspdk_json.a 00:02:42.873 LIB libspdk_rdma.a 00:02:42.873 SYMLINK libspdk_conf.so 00:02:42.873 CC lib/vmd/led.o 00:02:42.873 SO libspdk_json.so.5.1 00:02:42.873 SO libspdk_rdma.so.5.0 00:02:42.873 SYMLINK libspdk_json.so 00:02:42.873 SYMLINK libspdk_rdma.so 00:02:42.873 CC lib/env_dpdk/pci_ioat.o 00:02:42.873 CC lib/env_dpdk/pci_virtio.o 00:02:42.873 CC lib/env_dpdk/pci_vmd.o 00:02:43.132 CC lib/env_dpdk/pci_idxd.o 00:02:43.132 CC lib/env_dpdk/pci_event.o 00:02:43.132 CC lib/env_dpdk/sigbus_handler.o 00:02:43.132 CC lib/idxd/idxd_user.o 00:02:43.132 CC lib/env_dpdk/pci_dpdk.o 00:02:43.132 CC lib/idxd/idxd_kernel.o 00:02:43.132 LIB libspdk_vmd.a 00:02:43.132 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:43.132 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:43.132 SO libspdk_vmd.so.5.0 00:02:43.391 SYMLINK libspdk_vmd.so 00:02:43.391 LIB libspdk_idxd.a 00:02:43.391 CC lib/jsonrpc/jsonrpc_server.o 00:02:43.391 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:43.391 CC lib/jsonrpc/jsonrpc_client.o 00:02:43.391 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:43.391 SO libspdk_idxd.so.11.0 00:02:43.391 SYMLINK libspdk_idxd.so 00:02:43.650 LIB libspdk_jsonrpc.a 00:02:43.650 SO libspdk_jsonrpc.so.5.1 00:02:43.650 SYMLINK libspdk_jsonrpc.so 00:02:43.909 CC lib/rpc/rpc.o 00:02:43.909 LIB libspdk_env_dpdk.a 00:02:43.909 SO libspdk_env_dpdk.so.13.0 00:02:44.167 LIB libspdk_rpc.a 00:02:44.167 SO libspdk_rpc.so.5.0 00:02:44.167 SYMLINK libspdk_env_dpdk.so 00:02:44.167 SYMLINK libspdk_rpc.so 00:02:44.426 CC lib/notify/notify.o 00:02:44.426 CC lib/notify/notify_rpc.o 00:02:44.426 CC lib/sock/sock_rpc.o 00:02:44.426 CC lib/sock/sock.o 00:02:44.426 CC lib/trace/trace.o 00:02:44.426 CC lib/trace/trace_flags.o 00:02:44.426 CC lib/trace/trace_rpc.o 00:02:44.426 LIB libspdk_notify.a 00:02:44.426 LIB libspdk_trace.a 00:02:44.426 SO libspdk_notify.so.5.0 00:02:44.684 SO libspdk_trace.so.9.0 00:02:44.684 SYMLINK libspdk_notify.so 00:02:44.684 SYMLINK libspdk_trace.so 00:02:44.684 LIB libspdk_sock.a 00:02:44.684 SO libspdk_sock.so.8.0 00:02:44.943 CC lib/thread/thread.o 00:02:44.943 CC lib/thread/iobuf.o 00:02:44.943 SYMLINK libspdk_sock.so 00:02:44.943 CC lib/nvme/nvme_ctrlr.o 00:02:44.943 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:44.943 CC lib/nvme/nvme_ns_cmd.o 00:02:44.943 CC lib/nvme/nvme_fabric.o 00:02:44.943 CC lib/nvme/nvme_pcie_common.o 00:02:44.943 CC lib/nvme/nvme_ns.o 00:02:44.943 CC 
lib/nvme/nvme_pcie.o 00:02:44.943 CC lib/nvme/nvme_qpair.o 00:02:45.202 CC lib/nvme/nvme.o 00:02:45.770 CC lib/nvme/nvme_quirks.o 00:02:45.770 CC lib/nvme/nvme_transport.o 00:02:45.770 CC lib/nvme/nvme_discovery.o 00:02:45.770 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:46.029 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:46.029 CC lib/nvme/nvme_tcp.o 00:02:46.029 CC lib/nvme/nvme_opal.o 00:02:46.029 CC lib/nvme/nvme_io_msg.o 00:02:46.288 CC lib/nvme/nvme_poll_group.o 00:02:46.288 LIB libspdk_thread.a 00:02:46.288 SO libspdk_thread.so.9.0 00:02:46.547 SYMLINK libspdk_thread.so 00:02:46.547 CC lib/nvme/nvme_zns.o 00:02:46.547 CC lib/nvme/nvme_cuse.o 00:02:46.547 CC lib/nvme/nvme_vfio_user.o 00:02:46.547 CC lib/nvme/nvme_rdma.o 00:02:46.547 CC lib/accel/accel.o 00:02:46.547 CC lib/blob/blobstore.o 00:02:46.806 CC lib/blob/request.o 00:02:47.066 CC lib/init/json_config.o 00:02:47.066 CC lib/init/subsystem.o 00:02:47.066 CC lib/virtio/virtio.o 00:02:47.066 CC lib/virtio/virtio_vhost_user.o 00:02:47.066 CC lib/vfu_tgt/tgt_endpoint.o 00:02:47.325 CC lib/init/subsystem_rpc.o 00:02:47.325 CC lib/accel/accel_rpc.o 00:02:47.325 CC lib/accel/accel_sw.o 00:02:47.325 CC lib/init/rpc.o 00:02:47.325 CC lib/virtio/virtio_vfio_user.o 00:02:47.325 CC lib/virtio/virtio_pci.o 00:02:47.325 CC lib/vfu_tgt/tgt_rpc.o 00:02:47.325 CC lib/blob/zeroes.o 00:02:47.584 CC lib/blob/blob_bs_dev.o 00:02:47.584 LIB libspdk_init.a 00:02:47.584 LIB libspdk_accel.a 00:02:47.584 SO libspdk_init.so.4.0 00:02:47.584 LIB libspdk_vfu_tgt.a 00:02:47.584 SO libspdk_accel.so.14.0 00:02:47.584 SYMLINK libspdk_init.so 00:02:47.584 SO libspdk_vfu_tgt.so.2.0 00:02:47.584 SYMLINK libspdk_accel.so 00:02:47.584 SYMLINK libspdk_vfu_tgt.so 00:02:47.584 LIB libspdk_virtio.a 00:02:47.843 CC lib/event/app.o 00:02:47.843 CC lib/event/reactor.o 00:02:47.843 CC lib/event/log_rpc.o 00:02:47.843 CC lib/event/scheduler_static.o 00:02:47.843 CC lib/event/app_rpc.o 00:02:47.843 LIB libspdk_nvme.a 00:02:47.843 SO libspdk_virtio.so.6.0 00:02:47.843 CC lib/bdev/bdev.o 00:02:47.843 CC lib/bdev/bdev_rpc.o 00:02:47.843 SYMLINK libspdk_virtio.so 00:02:47.843 CC lib/bdev/bdev_zone.o 00:02:47.843 CC lib/bdev/part.o 00:02:47.843 CC lib/bdev/scsi_nvme.o 00:02:47.843 SO libspdk_nvme.so.12.0 00:02:48.102 SYMLINK libspdk_nvme.so 00:02:48.102 LIB libspdk_event.a 00:02:48.102 SO libspdk_event.so.12.0 00:02:48.373 SYMLINK libspdk_event.so 00:02:49.313 LIB libspdk_blob.a 00:02:49.313 SO libspdk_blob.so.10.1 00:02:49.572 SYMLINK libspdk_blob.so 00:02:49.572 CC lib/blobfs/blobfs.o 00:02:49.572 CC lib/blobfs/tree.o 00:02:49.572 CC lib/lvol/lvol.o 00:02:50.140 LIB libspdk_bdev.a 00:02:50.140 SO libspdk_bdev.so.14.0 00:02:50.399 SYMLINK libspdk_bdev.so 00:02:50.399 CC lib/nvmf/ctrlr.o 00:02:50.399 CC lib/nvmf/ctrlr_discovery.o 00:02:50.399 CC lib/nvmf/subsystem.o 00:02:50.399 CC lib/nvmf/ctrlr_bdev.o 00:02:50.399 CC lib/ublk/ublk.o 00:02:50.399 CC lib/scsi/dev.o 00:02:50.399 CC lib/ftl/ftl_core.o 00:02:50.399 CC lib/nbd/nbd.o 00:02:50.399 LIB libspdk_blobfs.a 00:02:50.658 SO libspdk_blobfs.so.9.0 00:02:50.658 LIB libspdk_lvol.a 00:02:50.658 SO libspdk_lvol.so.9.1 00:02:50.658 SYMLINK libspdk_blobfs.so 00:02:50.658 CC lib/ublk/ublk_rpc.o 00:02:50.658 SYMLINK libspdk_lvol.so 00:02:50.658 CC lib/nbd/nbd_rpc.o 00:02:50.658 CC lib/scsi/lun.o 00:02:50.918 CC lib/nvmf/nvmf.o 00:02:50.918 CC lib/ftl/ftl_init.o 00:02:50.918 CC lib/nvmf/nvmf_rpc.o 00:02:50.918 CC lib/scsi/port.o 00:02:50.918 LIB libspdk_nbd.a 00:02:50.918 CC lib/nvmf/transport.o 00:02:50.918 SO libspdk_nbd.so.6.0 00:02:50.918 CC 
lib/ftl/ftl_layout.o 00:02:51.177 SYMLINK libspdk_nbd.so 00:02:51.177 CC lib/ftl/ftl_debug.o 00:02:51.177 CC lib/scsi/scsi.o 00:02:51.177 LIB libspdk_ublk.a 00:02:51.177 CC lib/nvmf/tcp.o 00:02:51.177 SO libspdk_ublk.so.2.0 00:02:51.177 SYMLINK libspdk_ublk.so 00:02:51.177 CC lib/nvmf/vfio_user.o 00:02:51.177 CC lib/scsi/scsi_bdev.o 00:02:51.177 CC lib/scsi/scsi_pr.o 00:02:51.436 CC lib/ftl/ftl_io.o 00:02:51.436 CC lib/nvmf/rdma.o 00:02:51.695 CC lib/scsi/scsi_rpc.o 00:02:51.695 CC lib/scsi/task.o 00:02:51.695 CC lib/ftl/ftl_sb.o 00:02:51.695 CC lib/ftl/ftl_l2p.o 00:02:51.695 CC lib/ftl/ftl_l2p_flat.o 00:02:51.695 CC lib/ftl/ftl_nv_cache.o 00:02:51.695 CC lib/ftl/ftl_band.o 00:02:51.954 LIB libspdk_scsi.a 00:02:51.954 CC lib/ftl/ftl_band_ops.o 00:02:51.954 SO libspdk_scsi.so.8.0 00:02:51.954 CC lib/ftl/ftl_writer.o 00:02:51.954 CC lib/ftl/ftl_rq.o 00:02:51.954 SYMLINK libspdk_scsi.so 00:02:52.268 CC lib/iscsi/conn.o 00:02:52.268 CC lib/ftl/ftl_reloc.o 00:02:52.268 CC lib/ftl/ftl_l2p_cache.o 00:02:52.268 CC lib/vhost/vhost.o 00:02:52.268 CC lib/vhost/vhost_rpc.o 00:02:52.268 CC lib/vhost/vhost_scsi.o 00:02:52.527 CC lib/vhost/vhost_blk.o 00:02:52.528 CC lib/ftl/ftl_p2l.o 00:02:52.528 CC lib/ftl/mngt/ftl_mngt.o 00:02:52.797 CC lib/iscsi/init_grp.o 00:02:52.797 CC lib/iscsi/iscsi.o 00:02:52.797 CC lib/iscsi/md5.o 00:02:52.797 CC lib/iscsi/param.o 00:02:52.797 CC lib/iscsi/portal_grp.o 00:02:52.797 CC lib/vhost/rte_vhost_user.o 00:02:53.064 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:53.064 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:53.064 CC lib/iscsi/tgt_node.o 00:02:53.064 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:53.064 CC lib/iscsi/iscsi_subsystem.o 00:02:53.064 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:53.064 CC lib/iscsi/iscsi_rpc.o 00:02:53.064 CC lib/iscsi/task.o 00:02:53.323 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:53.323 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:53.323 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:53.323 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:53.323 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:53.323 LIB libspdk_nvmf.a 00:02:53.323 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:53.581 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:53.581 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:53.581 CC lib/ftl/utils/ftl_conf.o 00:02:53.581 CC lib/ftl/utils/ftl_md.o 00:02:53.581 SO libspdk_nvmf.so.17.0 00:02:53.581 CC lib/ftl/utils/ftl_mempool.o 00:02:53.581 CC lib/ftl/utils/ftl_bitmap.o 00:02:53.581 CC lib/ftl/utils/ftl_property.o 00:02:53.840 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:53.840 SYMLINK libspdk_nvmf.so 00:02:53.840 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:53.840 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:53.840 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:53.840 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:53.840 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:53.840 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:53.840 LIB libspdk_vhost.a 00:02:53.840 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:53.840 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:53.840 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:53.840 CC lib/ftl/base/ftl_base_dev.o 00:02:53.840 CC lib/ftl/base/ftl_base_bdev.o 00:02:54.098 SO libspdk_vhost.so.7.1 00:02:54.098 CC lib/ftl/ftl_trace.o 00:02:54.098 SYMLINK libspdk_vhost.so 00:02:54.098 LIB libspdk_iscsi.a 00:02:54.357 SO libspdk_iscsi.so.7.0 00:02:54.357 LIB libspdk_ftl.a 00:02:54.357 SYMLINK libspdk_iscsi.so 00:02:54.357 SO libspdk_ftl.so.8.0 00:02:54.616 SYMLINK libspdk_ftl.so 00:02:54.875 CC module/env_dpdk/env_dpdk_rpc.o 00:02:54.875 CC module/vfu_device/vfu_virtio.o 00:02:54.875 CC module/blob/bdev/blob_bdev.o 
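The CC / LIB / SO / SYMLINK entries in this part of the log come from SPDK's own make-based build: core libraries first, then modules (accel, sock, scheduler, bdev, vfu_device), event subsystems, and finally apps and examples. As a generic, hedged sketch only (not the exact CI invocation, whose flags come from the job's autorun configuration), a comparable build from a fresh checkout looks roughly like this; the --enable-debug flag is an assumption:

  # hedged sketch: generic SPDK build from source
  git submodule update --init          # pulls the bundled DPDK, among others
  sudo ./scripts/pkgdep.sh             # install build prerequisites
  ./configure --enable-debug
  make -j"$(nproc)"

The "END TEST make" marker further down in the log records how long this step took inside the test VM.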
00:02:54.875 CC module/accel/ioat/accel_ioat.o 00:02:54.875 CC module/accel/dsa/accel_dsa.o 00:02:54.875 CC module/sock/posix/posix.o 00:02:54.875 CC module/accel/error/accel_error.o 00:02:54.875 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:54.875 CC module/accel/iaa/accel_iaa.o 00:02:54.875 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:55.134 LIB libspdk_env_dpdk_rpc.a 00:02:55.134 SO libspdk_env_dpdk_rpc.so.5.0 00:02:55.134 CC module/accel/ioat/accel_ioat_rpc.o 00:02:55.134 SYMLINK libspdk_env_dpdk_rpc.so 00:02:55.134 CC module/accel/error/accel_error_rpc.o 00:02:55.134 CC module/accel/dsa/accel_dsa_rpc.o 00:02:55.134 LIB libspdk_scheduler_dynamic.a 00:02:55.134 CC module/accel/iaa/accel_iaa_rpc.o 00:02:55.134 LIB libspdk_scheduler_dpdk_governor.a 00:02:55.134 SO libspdk_scheduler_dynamic.so.3.0 00:02:55.134 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:55.134 LIB libspdk_blob_bdev.a 00:02:55.134 CC module/vfu_device/vfu_virtio_blk.o 00:02:55.134 SYMLINK libspdk_scheduler_dynamic.so 00:02:55.134 SO libspdk_blob_bdev.so.10.1 00:02:55.393 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:55.393 CC module/vfu_device/vfu_virtio_scsi.o 00:02:55.393 SYMLINK libspdk_blob_bdev.so 00:02:55.393 LIB libspdk_accel_ioat.a 00:02:55.393 CC module/vfu_device/vfu_virtio_rpc.o 00:02:55.393 LIB libspdk_accel_error.a 00:02:55.393 LIB libspdk_accel_dsa.a 00:02:55.393 LIB libspdk_accel_iaa.a 00:02:55.393 SO libspdk_accel_ioat.so.5.0 00:02:55.393 SO libspdk_accel_error.so.1.0 00:02:55.393 SO libspdk_accel_dsa.so.4.0 00:02:55.393 SO libspdk_accel_iaa.so.2.0 00:02:55.393 CC module/scheduler/gscheduler/gscheduler.o 00:02:55.393 SYMLINK libspdk_accel_ioat.so 00:02:55.393 SYMLINK libspdk_accel_error.so 00:02:55.393 SYMLINK libspdk_accel_dsa.so 00:02:55.393 SYMLINK libspdk_accel_iaa.so 00:02:55.393 LIB libspdk_scheduler_gscheduler.a 00:02:55.652 SO libspdk_scheduler_gscheduler.so.3.0 00:02:55.652 CC module/bdev/delay/vbdev_delay.o 00:02:55.652 CC module/bdev/error/vbdev_error.o 00:02:55.652 CC module/blobfs/bdev/blobfs_bdev.o 00:02:55.652 CC module/bdev/gpt/gpt.o 00:02:55.652 SYMLINK libspdk_scheduler_gscheduler.so 00:02:55.652 CC module/bdev/gpt/vbdev_gpt.o 00:02:55.652 CC module/bdev/lvol/vbdev_lvol.o 00:02:55.652 CC module/bdev/malloc/bdev_malloc.o 00:02:55.652 LIB libspdk_vfu_device.a 00:02:55.652 CC module/bdev/null/bdev_null.o 00:02:55.652 LIB libspdk_sock_posix.a 00:02:55.652 SO libspdk_vfu_device.so.2.0 00:02:55.652 SO libspdk_sock_posix.so.5.0 00:02:55.652 SYMLINK libspdk_vfu_device.so 00:02:55.652 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:55.652 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:55.911 SYMLINK libspdk_sock_posix.so 00:02:55.911 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:55.911 CC module/bdev/null/bdev_null_rpc.o 00:02:55.911 CC module/bdev/error/vbdev_error_rpc.o 00:02:55.911 LIB libspdk_bdev_gpt.a 00:02:55.911 SO libspdk_bdev_gpt.so.5.0 00:02:55.911 LIB libspdk_blobfs_bdev.a 00:02:55.911 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:55.911 LIB libspdk_bdev_null.a 00:02:55.911 LIB libspdk_bdev_malloc.a 00:02:55.911 SYMLINK libspdk_bdev_gpt.so 00:02:55.911 SO libspdk_blobfs_bdev.so.5.0 00:02:55.911 SO libspdk_bdev_null.so.5.0 00:02:55.911 SO libspdk_bdev_malloc.so.5.0 00:02:55.911 CC module/bdev/nvme/bdev_nvme.o 00:02:56.169 LIB libspdk_bdev_error.a 00:02:56.169 CC module/bdev/passthru/vbdev_passthru.o 00:02:56.169 SYMLINK libspdk_blobfs_bdev.so 00:02:56.169 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:56.169 SO libspdk_bdev_error.so.5.0 00:02:56.169 SYMLINK 
libspdk_bdev_null.so 00:02:56.169 SYMLINK libspdk_bdev_malloc.so 00:02:56.169 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:56.169 CC module/bdev/nvme/nvme_rpc.o 00:02:56.169 LIB libspdk_bdev_lvol.a 00:02:56.169 CC module/bdev/raid/bdev_raid.o 00:02:56.169 LIB libspdk_bdev_delay.a 00:02:56.169 SYMLINK libspdk_bdev_error.so 00:02:56.169 SO libspdk_bdev_lvol.so.5.0 00:02:56.169 CC module/bdev/raid/bdev_raid_rpc.o 00:02:56.169 SO libspdk_bdev_delay.so.5.0 00:02:56.169 CC module/bdev/split/vbdev_split.o 00:02:56.169 SYMLINK libspdk_bdev_lvol.so 00:02:56.169 CC module/bdev/split/vbdev_split_rpc.o 00:02:56.169 SYMLINK libspdk_bdev_delay.so 00:02:56.169 CC module/bdev/nvme/bdev_mdns_client.o 00:02:56.169 CC module/bdev/nvme/vbdev_opal.o 00:02:56.428 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:56.428 LIB libspdk_bdev_passthru.a 00:02:56.428 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:56.428 SO libspdk_bdev_passthru.so.5.0 00:02:56.428 LIB libspdk_bdev_split.a 00:02:56.428 SYMLINK libspdk_bdev_passthru.so 00:02:56.428 SO libspdk_bdev_split.so.5.0 00:02:56.428 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:56.686 SYMLINK libspdk_bdev_split.so 00:02:56.686 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:56.686 CC module/bdev/raid/bdev_raid_sb.o 00:02:56.686 CC module/bdev/aio/bdev_aio.o 00:02:56.686 CC module/bdev/raid/raid0.o 00:02:56.686 CC module/bdev/iscsi/bdev_iscsi.o 00:02:56.686 CC module/bdev/ftl/bdev_ftl.o 00:02:56.686 CC module/bdev/raid/raid1.o 00:02:56.686 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:56.686 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:56.945 LIB libspdk_bdev_zone_block.a 00:02:56.945 SO libspdk_bdev_zone_block.so.5.0 00:02:56.945 CC module/bdev/aio/bdev_aio_rpc.o 00:02:56.945 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:56.945 SYMLINK libspdk_bdev_zone_block.so 00:02:56.945 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:56.945 CC module/bdev/raid/concat.o 00:02:56.945 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:57.204 LIB libspdk_bdev_aio.a 00:02:57.204 SO libspdk_bdev_aio.so.5.0 00:02:57.204 LIB libspdk_bdev_iscsi.a 00:02:57.204 LIB libspdk_bdev_ftl.a 00:02:57.204 SYMLINK libspdk_bdev_aio.so 00:02:57.204 SO libspdk_bdev_iscsi.so.5.0 00:02:57.204 SO libspdk_bdev_ftl.so.5.0 00:02:57.204 LIB libspdk_bdev_raid.a 00:02:57.204 SYMLINK libspdk_bdev_iscsi.so 00:02:57.204 SYMLINK libspdk_bdev_ftl.so 00:02:57.204 LIB libspdk_bdev_virtio.a 00:02:57.204 SO libspdk_bdev_raid.so.5.0 00:02:57.462 SO libspdk_bdev_virtio.so.5.0 00:02:57.462 SYMLINK libspdk_bdev_raid.so 00:02:57.462 SYMLINK libspdk_bdev_virtio.so 00:02:58.397 LIB libspdk_bdev_nvme.a 00:02:58.397 SO libspdk_bdev_nvme.so.6.0 00:02:58.397 SYMLINK libspdk_bdev_nvme.so 00:02:58.655 CC module/event/subsystems/scheduler/scheduler.o 00:02:58.655 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:58.655 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:58.655 CC module/event/subsystems/sock/sock.o 00:02:58.655 CC module/event/subsystems/iobuf/iobuf.o 00:02:58.655 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:58.655 CC module/event/subsystems/vmd/vmd.o 00:02:58.655 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:58.914 LIB libspdk_event_sock.a 00:02:58.914 LIB libspdk_event_vhost_blk.a 00:02:58.914 LIB libspdk_event_scheduler.a 00:02:58.914 LIB libspdk_event_vmd.a 00:02:58.914 LIB libspdk_event_vfu_tgt.a 00:02:58.914 LIB libspdk_event_iobuf.a 00:02:58.914 SO libspdk_event_sock.so.4.0 00:02:58.914 SO libspdk_event_vhost_blk.so.2.0 00:02:58.914 SO libspdk_event_scheduler.so.3.0 00:02:58.914 SO 
libspdk_event_vfu_tgt.so.2.0 00:02:58.914 SO libspdk_event_vmd.so.5.0 00:02:58.914 SO libspdk_event_iobuf.so.2.0 00:02:58.914 SYMLINK libspdk_event_sock.so 00:02:58.914 SYMLINK libspdk_event_scheduler.so 00:02:58.914 SYMLINK libspdk_event_vmd.so 00:02:58.914 SYMLINK libspdk_event_vhost_blk.so 00:02:58.914 SYMLINK libspdk_event_vfu_tgt.so 00:02:58.914 SYMLINK libspdk_event_iobuf.so 00:02:59.172 CC module/event/subsystems/accel/accel.o 00:02:59.431 LIB libspdk_event_accel.a 00:02:59.431 SO libspdk_event_accel.so.5.0 00:02:59.431 SYMLINK libspdk_event_accel.so 00:02:59.689 CC module/event/subsystems/bdev/bdev.o 00:02:59.948 LIB libspdk_event_bdev.a 00:02:59.948 SO libspdk_event_bdev.so.5.0 00:02:59.948 SYMLINK libspdk_event_bdev.so 00:03:00.207 CC module/event/subsystems/scsi/scsi.o 00:03:00.207 CC module/event/subsystems/ublk/ublk.o 00:03:00.207 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:00.207 CC module/event/subsystems/nbd/nbd.o 00:03:00.207 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:00.207 LIB libspdk_event_ublk.a 00:03:00.207 LIB libspdk_event_nbd.a 00:03:00.207 LIB libspdk_event_scsi.a 00:03:00.207 SO libspdk_event_ublk.so.2.0 00:03:00.207 SO libspdk_event_nbd.so.5.0 00:03:00.207 SO libspdk_event_scsi.so.5.0 00:03:00.466 SYMLINK libspdk_event_ublk.so 00:03:00.466 SYMLINK libspdk_event_nbd.so 00:03:00.466 SYMLINK libspdk_event_scsi.so 00:03:00.466 LIB libspdk_event_nvmf.a 00:03:00.466 SO libspdk_event_nvmf.so.5.0 00:03:00.466 SYMLINK libspdk_event_nvmf.so 00:03:00.466 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:00.466 CC module/event/subsystems/iscsi/iscsi.o 00:03:00.725 LIB libspdk_event_vhost_scsi.a 00:03:00.725 LIB libspdk_event_iscsi.a 00:03:00.726 SO libspdk_event_vhost_scsi.so.2.0 00:03:00.726 SO libspdk_event_iscsi.so.5.0 00:03:00.726 SYMLINK libspdk_event_vhost_scsi.so 00:03:00.984 SYMLINK libspdk_event_iscsi.so 00:03:00.984 SO libspdk.so.5.0 00:03:00.984 SYMLINK libspdk.so 00:03:01.243 CXX app/trace/trace.o 00:03:01.243 CC app/trace_record/trace_record.o 00:03:01.243 CC app/iscsi_tgt/iscsi_tgt.o 00:03:01.243 CC app/nvmf_tgt/nvmf_main.o 00:03:01.243 CC examples/ioat/perf/perf.o 00:03:01.243 CC examples/accel/perf/accel_perf.o 00:03:01.243 CC app/spdk_tgt/spdk_tgt.o 00:03:01.243 CC examples/blob/hello_world/hello_blob.o 00:03:01.243 CC test/accel/dif/dif.o 00:03:01.243 CC examples/bdev/hello_world/hello_bdev.o 00:03:01.502 LINK nvmf_tgt 00:03:01.502 LINK spdk_trace_record 00:03:01.502 LINK iscsi_tgt 00:03:01.502 LINK spdk_tgt 00:03:01.502 LINK ioat_perf 00:03:01.502 LINK hello_blob 00:03:01.502 LINK hello_bdev 00:03:01.760 LINK spdk_trace 00:03:01.760 LINK accel_perf 00:03:01.760 CC examples/blob/cli/blobcli.o 00:03:01.760 LINK dif 00:03:01.760 CC examples/ioat/verify/verify.o 00:03:01.760 CC examples/nvme/hello_world/hello_world.o 00:03:01.760 CC examples/sock/hello_world/hello_sock.o 00:03:01.760 CC examples/vmd/lsvmd/lsvmd.o 00:03:02.018 CC examples/bdev/bdevperf/bdevperf.o 00:03:02.018 CC app/spdk_lspci/spdk_lspci.o 00:03:02.018 CC examples/nvmf/nvmf/nvmf.o 00:03:02.018 LINK verify 00:03:02.018 CC test/app/bdev_svc/bdev_svc.o 00:03:02.018 CC examples/nvme/reconnect/reconnect.o 00:03:02.018 LINK lsvmd 00:03:02.018 LINK spdk_lspci 00:03:02.018 LINK hello_world 00:03:02.018 LINK hello_sock 00:03:02.277 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:02.277 LINK bdev_svc 00:03:02.277 LINK blobcli 00:03:02.277 CC examples/vmd/led/led.o 00:03:02.277 CC app/spdk_nvme_perf/perf.o 00:03:02.277 LINK nvmf 00:03:02.277 CC app/spdk_nvme_identify/identify.o 
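Among the binaries linked in this stretch is nvmf_tgt, the NVMe-oF target application that the nvmf-tcp test suites exercise later in the run. Purely as an illustrative, hedged sketch (the actual tests drive this through test/nvmf/common.sh and RPC helpers; the 127.0.0.1 address and port 4420 mirror the NVMF_TCP_IP_ADDRESS and NVMF_PORT values that appear further down in this log, and the subsystem NQN, serial number, and malloc bdev sizes are made-up example values), a minimal manual TCP target bring-up with the freshly built binaries would look something like:

  # hedged sketch: manual NVMe-oF/TCP target bring-up with the binaries built above
  ./build/bin/nvmf_tgt &
  ./scripts/rpc.py nvmf_create_transport -t TCP
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420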
00:03:02.277 CC app/spdk_nvme_discover/discovery_aer.o 00:03:02.277 LINK reconnect 00:03:02.536 LINK led 00:03:02.536 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:02.536 LINK spdk_nvme_discover 00:03:02.536 CC examples/util/zipf/zipf.o 00:03:02.536 CC examples/nvme/arbitration/arbitration.o 00:03:02.536 CC examples/nvme/hotplug/hotplug.o 00:03:02.793 LINK nvme_manage 00:03:02.793 LINK bdevperf 00:03:02.793 CC examples/thread/thread/thread_ex.o 00:03:02.793 LINK zipf 00:03:02.793 CC examples/idxd/perf/perf.o 00:03:02.793 LINK hotplug 00:03:03.051 LINK arbitration 00:03:03.051 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:03.051 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:03.051 LINK thread 00:03:03.051 LINK nvme_fuzz 00:03:03.051 LINK spdk_nvme_perf 00:03:03.051 CC app/spdk_top/spdk_top.o 00:03:03.051 LINK spdk_nvme_identify 00:03:03.051 LINK interrupt_tgt 00:03:03.310 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:03.310 LINK idxd_perf 00:03:03.310 CC app/vhost/vhost.o 00:03:03.310 CC test/app/histogram_perf/histogram_perf.o 00:03:03.310 CC test/app/jsoncat/jsoncat.o 00:03:03.310 CC test/app/stub/stub.o 00:03:03.310 LINK cmb_copy 00:03:03.310 CC examples/nvme/abort/abort.o 00:03:03.568 LINK vhost 00:03:03.568 LINK jsoncat 00:03:03.568 LINK histogram_perf 00:03:03.568 CC test/bdev/bdevio/bdevio.o 00:03:03.568 LINK stub 00:03:03.568 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:03.827 CC app/spdk_dd/spdk_dd.o 00:03:03.827 TEST_HEADER include/spdk/accel.h 00:03:03.827 CC app/fio/nvme/fio_plugin.o 00:03:03.827 TEST_HEADER include/spdk/accel_module.h 00:03:03.827 TEST_HEADER include/spdk/assert.h 00:03:03.827 TEST_HEADER include/spdk/barrier.h 00:03:03.827 TEST_HEADER include/spdk/base64.h 00:03:03.827 TEST_HEADER include/spdk/bdev.h 00:03:03.827 TEST_HEADER include/spdk/bdev_module.h 00:03:03.827 TEST_HEADER include/spdk/bdev_zone.h 00:03:03.827 TEST_HEADER include/spdk/bit_array.h 00:03:03.827 TEST_HEADER include/spdk/bit_pool.h 00:03:03.827 TEST_HEADER include/spdk/blob_bdev.h 00:03:03.827 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:03.827 TEST_HEADER include/spdk/blobfs.h 00:03:03.827 TEST_HEADER include/spdk/blob.h 00:03:03.827 TEST_HEADER include/spdk/conf.h 00:03:03.827 TEST_HEADER include/spdk/config.h 00:03:03.827 TEST_HEADER include/spdk/cpuset.h 00:03:03.827 TEST_HEADER include/spdk/crc16.h 00:03:03.827 TEST_HEADER include/spdk/crc32.h 00:03:03.827 TEST_HEADER include/spdk/crc64.h 00:03:03.827 LINK abort 00:03:03.827 TEST_HEADER include/spdk/dif.h 00:03:03.827 TEST_HEADER include/spdk/dma.h 00:03:03.827 TEST_HEADER include/spdk/endian.h 00:03:03.827 TEST_HEADER include/spdk/env_dpdk.h 00:03:03.827 TEST_HEADER include/spdk/env.h 00:03:03.827 TEST_HEADER include/spdk/event.h 00:03:03.827 TEST_HEADER include/spdk/fd_group.h 00:03:03.827 TEST_HEADER include/spdk/fd.h 00:03:03.827 TEST_HEADER include/spdk/file.h 00:03:03.827 TEST_HEADER include/spdk/ftl.h 00:03:03.827 TEST_HEADER include/spdk/gpt_spec.h 00:03:03.827 TEST_HEADER include/spdk/hexlify.h 00:03:03.827 TEST_HEADER include/spdk/histogram_data.h 00:03:03.827 TEST_HEADER include/spdk/idxd.h 00:03:03.827 TEST_HEADER include/spdk/idxd_spec.h 00:03:03.827 LINK pmr_persistence 00:03:03.827 TEST_HEADER include/spdk/init.h 00:03:03.827 TEST_HEADER include/spdk/ioat.h 00:03:03.827 CC test/blobfs/mkfs/mkfs.o 00:03:03.827 TEST_HEADER include/spdk/ioat_spec.h 00:03:03.827 TEST_HEADER include/spdk/iscsi_spec.h 00:03:03.827 TEST_HEADER include/spdk/json.h 00:03:03.827 TEST_HEADER include/spdk/jsonrpc.h 00:03:03.827 TEST_HEADER 
include/spdk/likely.h 00:03:03.827 TEST_HEADER include/spdk/log.h 00:03:03.827 TEST_HEADER include/spdk/lvol.h 00:03:03.827 TEST_HEADER include/spdk/memory.h 00:03:03.827 TEST_HEADER include/spdk/mmio.h 00:03:03.827 TEST_HEADER include/spdk/nbd.h 00:03:03.827 TEST_HEADER include/spdk/notify.h 00:03:03.827 TEST_HEADER include/spdk/nvme.h 00:03:03.827 TEST_HEADER include/spdk/nvme_intel.h 00:03:03.827 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:03.827 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:03.827 TEST_HEADER include/spdk/nvme_spec.h 00:03:03.827 TEST_HEADER include/spdk/nvme_zns.h 00:03:03.827 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:03.827 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:03.827 TEST_HEADER include/spdk/nvmf.h 00:03:03.827 TEST_HEADER include/spdk/nvmf_spec.h 00:03:03.827 TEST_HEADER include/spdk/nvmf_transport.h 00:03:03.827 TEST_HEADER include/spdk/opal.h 00:03:03.827 TEST_HEADER include/spdk/opal_spec.h 00:03:03.827 TEST_HEADER include/spdk/pci_ids.h 00:03:03.827 TEST_HEADER include/spdk/pipe.h 00:03:03.827 TEST_HEADER include/spdk/queue.h 00:03:03.827 TEST_HEADER include/spdk/reduce.h 00:03:03.827 TEST_HEADER include/spdk/rpc.h 00:03:03.827 TEST_HEADER include/spdk/scheduler.h 00:03:03.827 TEST_HEADER include/spdk/scsi.h 00:03:03.827 TEST_HEADER include/spdk/scsi_spec.h 00:03:03.827 TEST_HEADER include/spdk/sock.h 00:03:03.827 TEST_HEADER include/spdk/stdinc.h 00:03:03.827 TEST_HEADER include/spdk/string.h 00:03:03.827 TEST_HEADER include/spdk/thread.h 00:03:03.827 LINK bdevio 00:03:03.827 TEST_HEADER include/spdk/trace.h 00:03:03.827 TEST_HEADER include/spdk/trace_parser.h 00:03:03.827 TEST_HEADER include/spdk/tree.h 00:03:03.827 TEST_HEADER include/spdk/ublk.h 00:03:03.827 TEST_HEADER include/spdk/util.h 00:03:03.827 TEST_HEADER include/spdk/uuid.h 00:03:03.827 TEST_HEADER include/spdk/version.h 00:03:03.827 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:03.827 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:03.827 TEST_HEADER include/spdk/vhost.h 00:03:03.827 TEST_HEADER include/spdk/vmd.h 00:03:03.827 TEST_HEADER include/spdk/xor.h 00:03:03.827 TEST_HEADER include/spdk/zipf.h 00:03:03.827 CXX test/cpp_headers/accel.o 00:03:03.827 CXX test/cpp_headers/accel_module.o 00:03:04.086 LINK spdk_top 00:03:04.086 LINK mkfs 00:03:04.086 LINK spdk_dd 00:03:04.086 CXX test/cpp_headers/assert.o 00:03:04.344 CC test/dma/test_dma/test_dma.o 00:03:04.344 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:04.344 CC app/fio/bdev/fio_plugin.o 00:03:04.344 CXX test/cpp_headers/barrier.o 00:03:04.344 LINK spdk_nvme 00:03:04.344 CC test/env/mem_callbacks/mem_callbacks.o 00:03:04.603 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:04.603 CC test/event/event_perf/event_perf.o 00:03:04.603 CXX test/cpp_headers/base64.o 00:03:04.603 CC test/event/reactor/reactor.o 00:03:04.603 CC test/lvol/esnap/esnap.o 00:03:04.603 LINK test_dma 00:03:04.603 LINK iscsi_fuzz 00:03:04.603 LINK event_perf 00:03:04.861 LINK reactor 00:03:04.862 CXX test/cpp_headers/bdev.o 00:03:04.862 CXX test/cpp_headers/bdev_module.o 00:03:04.862 LINK spdk_bdev 00:03:04.862 CXX test/cpp_headers/bdev_zone.o 00:03:04.862 LINK vhost_fuzz 00:03:04.862 CC test/event/reactor_perf/reactor_perf.o 00:03:05.120 CC test/event/app_repeat/app_repeat.o 00:03:05.120 CXX test/cpp_headers/bit_array.o 00:03:05.120 CC test/event/scheduler/scheduler.o 00:03:05.120 CXX test/cpp_headers/bit_pool.o 00:03:05.120 LINK mem_callbacks 00:03:05.120 CC test/rpc_client/rpc_client_test.o 00:03:05.120 CC test/nvme/aer/aer.o 00:03:05.120 LINK 
reactor_perf 00:03:05.120 LINK app_repeat 00:03:05.379 CXX test/cpp_headers/blob_bdev.o 00:03:05.379 CC test/nvme/reset/reset.o 00:03:05.379 CC test/env/vtophys/vtophys.o 00:03:05.379 CXX test/cpp_headers/blobfs_bdev.o 00:03:05.379 LINK rpc_client_test 00:03:05.379 LINK scheduler 00:03:05.379 CXX test/cpp_headers/blobfs.o 00:03:05.379 LINK aer 00:03:05.379 LINK vtophys 00:03:05.637 CXX test/cpp_headers/blob.o 00:03:05.637 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:05.637 LINK reset 00:03:05.637 CXX test/cpp_headers/conf.o 00:03:05.637 CXX test/cpp_headers/config.o 00:03:05.637 CC test/env/memory/memory_ut.o 00:03:05.896 CC test/nvme/sgl/sgl.o 00:03:05.896 CC test/nvme/e2edp/nvme_dp.o 00:03:05.896 LINK env_dpdk_post_init 00:03:05.896 CC test/thread/poller_perf/poller_perf.o 00:03:05.896 CC test/env/pci/pci_ut.o 00:03:05.896 CXX test/cpp_headers/cpuset.o 00:03:05.896 CC test/nvme/overhead/overhead.o 00:03:06.155 CXX test/cpp_headers/crc16.o 00:03:06.155 CC test/nvme/err_injection/err_injection.o 00:03:06.155 LINK nvme_dp 00:03:06.155 LINK poller_perf 00:03:06.155 LINK sgl 00:03:06.155 CXX test/cpp_headers/crc32.o 00:03:06.413 LINK err_injection 00:03:06.413 LINK overhead 00:03:06.413 CC test/nvme/reserve/reserve.o 00:03:06.413 CC test/nvme/startup/startup.o 00:03:06.413 LINK pci_ut 00:03:06.413 CC test/nvme/simple_copy/simple_copy.o 00:03:06.413 CXX test/cpp_headers/crc64.o 00:03:06.413 CXX test/cpp_headers/dif.o 00:03:06.413 CC test/nvme/connect_stress/connect_stress.o 00:03:06.670 LINK startup 00:03:06.670 LINK reserve 00:03:06.670 CXX test/cpp_headers/dma.o 00:03:06.670 CXX test/cpp_headers/endian.o 00:03:06.670 LINK simple_copy 00:03:06.670 CC test/nvme/boot_partition/boot_partition.o 00:03:06.670 LINK connect_stress 00:03:06.927 CC test/nvme/compliance/nvme_compliance.o 00:03:06.927 LINK memory_ut 00:03:06.927 CXX test/cpp_headers/env_dpdk.o 00:03:06.927 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.927 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:07.185 CC test/nvme/fdp/fdp.o 00:03:07.185 LINK boot_partition 00:03:07.185 CXX test/cpp_headers/env.o 00:03:07.185 LINK fused_ordering 00:03:07.185 CC test/nvme/cuse/cuse.o 00:03:07.185 CXX test/cpp_headers/event.o 00:03:07.185 LINK nvme_compliance 00:03:07.444 LINK doorbell_aers 00:03:07.444 CXX test/cpp_headers/fd_group.o 00:03:07.444 CXX test/cpp_headers/fd.o 00:03:07.444 CXX test/cpp_headers/file.o 00:03:07.444 LINK fdp 00:03:07.444 CXX test/cpp_headers/ftl.o 00:03:07.444 CXX test/cpp_headers/gpt_spec.o 00:03:07.444 CXX test/cpp_headers/hexlify.o 00:03:07.701 CXX test/cpp_headers/histogram_data.o 00:03:07.701 CXX test/cpp_headers/idxd.o 00:03:07.701 CXX test/cpp_headers/idxd_spec.o 00:03:07.701 CXX test/cpp_headers/init.o 00:03:07.701 CXX test/cpp_headers/ioat.o 00:03:07.701 CXX test/cpp_headers/ioat_spec.o 00:03:07.701 CXX test/cpp_headers/iscsi_spec.o 00:03:07.959 CXX test/cpp_headers/json.o 00:03:07.959 CXX test/cpp_headers/jsonrpc.o 00:03:07.959 CXX test/cpp_headers/likely.o 00:03:07.959 CXX test/cpp_headers/log.o 00:03:07.959 CXX test/cpp_headers/lvol.o 00:03:07.959 CXX test/cpp_headers/memory.o 00:03:07.959 CXX test/cpp_headers/mmio.o 00:03:08.217 CXX test/cpp_headers/nbd.o 00:03:08.217 CXX test/cpp_headers/notify.o 00:03:08.217 CXX test/cpp_headers/nvme.o 00:03:08.217 CXX test/cpp_headers/nvme_intel.o 00:03:08.217 CXX test/cpp_headers/nvme_ocssd.o 00:03:08.217 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:08.217 CXX test/cpp_headers/nvme_spec.o 00:03:08.217 CXX test/cpp_headers/nvme_zns.o 00:03:08.217 CXX 
test/cpp_headers/nvmf_cmd.o 00:03:08.506 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:08.506 CXX test/cpp_headers/nvmf.o 00:03:08.506 CXX test/cpp_headers/nvmf_spec.o 00:03:08.506 CXX test/cpp_headers/nvmf_transport.o 00:03:08.506 CXX test/cpp_headers/opal.o 00:03:08.506 CXX test/cpp_headers/opal_spec.o 00:03:08.506 CXX test/cpp_headers/pci_ids.o 00:03:08.770 LINK cuse 00:03:08.770 CXX test/cpp_headers/pipe.o 00:03:08.770 CXX test/cpp_headers/queue.o 00:03:08.770 CXX test/cpp_headers/reduce.o 00:03:08.770 CXX test/cpp_headers/rpc.o 00:03:08.770 CXX test/cpp_headers/scheduler.o 00:03:08.770 CXX test/cpp_headers/scsi.o 00:03:08.770 CXX test/cpp_headers/scsi_spec.o 00:03:08.770 CXX test/cpp_headers/sock.o 00:03:08.770 CXX test/cpp_headers/stdinc.o 00:03:08.770 CXX test/cpp_headers/string.o 00:03:08.770 CXX test/cpp_headers/thread.o 00:03:08.770 CXX test/cpp_headers/trace.o 00:03:08.770 CXX test/cpp_headers/trace_parser.o 00:03:09.029 CXX test/cpp_headers/tree.o 00:03:09.029 CXX test/cpp_headers/ublk.o 00:03:09.029 CXX test/cpp_headers/util.o 00:03:09.029 CXX test/cpp_headers/uuid.o 00:03:09.029 CXX test/cpp_headers/version.o 00:03:09.029 CXX test/cpp_headers/vfio_user_pci.o 00:03:09.029 CXX test/cpp_headers/vfio_user_spec.o 00:03:09.029 CXX test/cpp_headers/vhost.o 00:03:09.029 CXX test/cpp_headers/vmd.o 00:03:09.029 CXX test/cpp_headers/xor.o 00:03:09.029 CXX test/cpp_headers/zipf.o 00:03:09.966 LINK esnap 00:03:10.534 00:03:10.534 real 1m0.599s 00:03:10.534 user 6m27.273s 00:03:10.534 sys 1m36.191s 00:03:10.534 22:03:06 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:10.534 22:03:06 -- common/autotest_common.sh@10 -- $ set +x 00:03:10.534 ************************************ 00:03:10.534 END TEST make 00:03:10.534 ************************************ 00:03:10.534 22:03:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:10.534 22:03:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:10.534 22:03:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:10.534 22:03:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:10.534 22:03:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:10.534 22:03:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:10.534 22:03:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:10.534 22:03:07 -- scripts/common.sh@335 -- # IFS=.-: 00:03:10.534 22:03:07 -- scripts/common.sh@335 -- # read -ra ver1 00:03:10.534 22:03:07 -- scripts/common.sh@336 -- # IFS=.-: 00:03:10.534 22:03:07 -- scripts/common.sh@336 -- # read -ra ver2 00:03:10.534 22:03:07 -- scripts/common.sh@337 -- # local 'op=<' 00:03:10.534 22:03:07 -- scripts/common.sh@339 -- # ver1_l=2 00:03:10.534 22:03:07 -- scripts/common.sh@340 -- # ver2_l=1 00:03:10.534 22:03:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:10.534 22:03:07 -- scripts/common.sh@343 -- # case "$op" in 00:03:10.534 22:03:07 -- scripts/common.sh@344 -- # : 1 00:03:10.534 22:03:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:10.534 22:03:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:10.534 22:03:07 -- scripts/common.sh@364 -- # decimal 1 00:03:10.534 22:03:07 -- scripts/common.sh@352 -- # local d=1 00:03:10.534 22:03:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:10.534 22:03:07 -- scripts/common.sh@354 -- # echo 1 00:03:10.534 22:03:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:10.534 22:03:07 -- scripts/common.sh@365 -- # decimal 2 00:03:10.534 22:03:07 -- scripts/common.sh@352 -- # local d=2 00:03:10.534 22:03:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:10.534 22:03:07 -- scripts/common.sh@354 -- # echo 2 00:03:10.534 22:03:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:10.534 22:03:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:10.534 22:03:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:10.534 22:03:07 -- scripts/common.sh@367 -- # return 0 00:03:10.534 22:03:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:10.534 22:03:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:10.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.534 --rc genhtml_branch_coverage=1 00:03:10.534 --rc genhtml_function_coverage=1 00:03:10.534 --rc genhtml_legend=1 00:03:10.534 --rc geninfo_all_blocks=1 00:03:10.534 --rc geninfo_unexecuted_blocks=1 00:03:10.534 00:03:10.534 ' 00:03:10.534 22:03:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:10.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.534 --rc genhtml_branch_coverage=1 00:03:10.534 --rc genhtml_function_coverage=1 00:03:10.534 --rc genhtml_legend=1 00:03:10.534 --rc geninfo_all_blocks=1 00:03:10.534 --rc geninfo_unexecuted_blocks=1 00:03:10.534 00:03:10.534 ' 00:03:10.534 22:03:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:10.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.534 --rc genhtml_branch_coverage=1 00:03:10.534 --rc genhtml_function_coverage=1 00:03:10.534 --rc genhtml_legend=1 00:03:10.534 --rc geninfo_all_blocks=1 00:03:10.534 --rc geninfo_unexecuted_blocks=1 00:03:10.534 00:03:10.534 ' 00:03:10.534 22:03:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:10.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.534 --rc genhtml_branch_coverage=1 00:03:10.534 --rc genhtml_function_coverage=1 00:03:10.534 --rc genhtml_legend=1 00:03:10.534 --rc geninfo_all_blocks=1 00:03:10.534 --rc geninfo_unexecuted_blocks=1 00:03:10.534 00:03:10.534 ' 00:03:10.534 22:03:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:10.534 22:03:07 -- nvmf/common.sh@7 -- # uname -s 00:03:10.534 22:03:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:10.534 22:03:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:10.534 22:03:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:10.534 22:03:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:10.534 22:03:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:10.534 22:03:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:10.534 22:03:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:10.534 22:03:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:10.534 22:03:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:10.534 22:03:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:10.794 22:03:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:03:10.794 
22:03:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:03:10.794 22:03:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:10.794 22:03:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:10.794 22:03:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:10.794 22:03:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:10.794 22:03:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:10.795 22:03:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:10.795 22:03:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:10.795 22:03:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.795 22:03:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.795 22:03:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.795 22:03:07 -- paths/export.sh@5 -- # export PATH 00:03:10.795 22:03:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.795 22:03:07 -- nvmf/common.sh@46 -- # : 0 00:03:10.795 22:03:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:10.795 22:03:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:10.795 22:03:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:10.795 22:03:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:10.795 22:03:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:10.795 22:03:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:10.795 22:03:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:10.795 22:03:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:10.795 22:03:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:10.795 22:03:07 -- spdk/autotest.sh@32 -- # uname -s 00:03:10.795 22:03:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:10.795 22:03:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:10.795 22:03:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:10.795 22:03:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:10.795 22:03:07 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:10.795 22:03:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:10.795 22:03:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:10.795 22:03:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:10.795 22:03:07 -- spdk/autotest.sh@48 -- # 
udevadm_pid=49725 00:03:10.795 22:03:07 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:10.795 22:03:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:10.795 22:03:07 -- spdk/autotest.sh@54 -- # echo 49744 00:03:10.795 22:03:07 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:10.795 22:03:07 -- spdk/autotest.sh@56 -- # echo 49747 00:03:10.795 22:03:07 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:10.795 22:03:07 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:10.795 22:03:07 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:10.795 22:03:07 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:10.795 22:03:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:10.795 22:03:07 -- common/autotest_common.sh@10 -- # set +x 00:03:10.795 22:03:07 -- spdk/autotest.sh@70 -- # create_test_list 00:03:10.795 22:03:07 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:10.795 22:03:07 -- common/autotest_common.sh@10 -- # set +x 00:03:10.795 22:03:07 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:10.795 22:03:07 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:10.795 22:03:07 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:10.795 22:03:07 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:10.795 22:03:07 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:10.795 22:03:07 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:10.795 22:03:07 -- common/autotest_common.sh@1450 -- # uname 00:03:10.795 22:03:07 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:10.795 22:03:07 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:10.795 22:03:07 -- common/autotest_common.sh@1470 -- # uname 00:03:10.795 22:03:07 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:10.795 22:03:07 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:10.795 22:03:07 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:10.795 lcov: LCOV version 1.15 00:03:10.795 22:03:07 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:18.919 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:18.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:18.919 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:18.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:18.919 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:18.919 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:40.885 22:03:33 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:40.885 22:03:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:40.885 22:03:33 -- common/autotest_common.sh@10 -- # set +x 00:03:40.885 22:03:33 -- spdk/autotest.sh@89 -- # rm -f 00:03:40.885 22:03:33 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.885 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:40.885 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:40.885 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:40.885 22:03:34 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:40.885 22:03:34 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:40.885 22:03:34 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:40.885 22:03:34 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:40.885 22:03:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:40.885 22:03:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:40.885 22:03:34 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:40.885 22:03:34 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:40.885 22:03:34 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:40.885 22:03:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:40.885 22:03:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:40.885 22:03:34 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:40.885 22:03:34 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:40.885 22:03:34 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:40.885 22:03:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:40.885 22:03:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:40.885 22:03:34 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:40.885 22:03:34 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:40.885 22:03:34 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:40.885 22:03:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:40.885 22:03:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:40.885 22:03:34 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:40.885 22:03:34 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:40.885 22:03:34 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:40.885 22:03:34 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:40.885 22:03:34 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:03:40.885 22:03:34 -- spdk/autotest.sh@108 -- # grep -v p 00:03:40.885 22:03:34 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:40.885 22:03:34 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:40.885 22:03:34 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:40.885 22:03:34 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:40.885 22:03:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:40.885 No valid GPT data, bailing 00:03:40.885 22:03:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
00:03:40.885 22:03:34 -- scripts/common.sh@393 -- # pt= 00:03:40.885 22:03:34 -- scripts/common.sh@394 -- # return 1 00:03:40.885 22:03:34 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:40.885 1+0 records in 00:03:40.885 1+0 records out 00:03:40.885 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463549 s, 226 MB/s 00:03:40.885 22:03:34 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:40.885 22:03:34 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:40.885 22:03:34 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:40.885 22:03:34 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:40.885 22:03:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:40.885 No valid GPT data, bailing 00:03:40.885 22:03:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:40.885 22:03:34 -- scripts/common.sh@393 -- # pt= 00:03:40.885 22:03:34 -- scripts/common.sh@394 -- # return 1 00:03:40.885 22:03:34 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:40.885 1+0 records in 00:03:40.885 1+0 records out 00:03:40.885 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00523328 s, 200 MB/s 00:03:40.885 22:03:34 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:40.885 22:03:34 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:40.885 22:03:34 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:03:40.885 22:03:34 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:03:40.885 22:03:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:40.885 No valid GPT data, bailing 00:03:40.885 22:03:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:40.885 22:03:34 -- scripts/common.sh@393 -- # pt= 00:03:40.885 22:03:34 -- scripts/common.sh@394 -- # return 1 00:03:40.885 22:03:34 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:40.885 1+0 records in 00:03:40.885 1+0 records out 00:03:40.885 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516865 s, 203 MB/s 00:03:40.885 22:03:34 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:40.885 22:03:34 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:40.885 22:03:34 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:03:40.885 22:03:34 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:03:40.885 22:03:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:40.885 No valid GPT data, bailing 00:03:40.885 22:03:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:40.885 22:03:34 -- scripts/common.sh@393 -- # pt= 00:03:40.885 22:03:34 -- scripts/common.sh@394 -- # return 1 00:03:40.885 22:03:34 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:40.885 1+0 records in 00:03:40.885 1+0 records out 00:03:40.885 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540129 s, 194 MB/s 00:03:40.885 22:03:34 -- spdk/autotest.sh@116 -- # sync 00:03:40.885 22:03:34 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:40.885 22:03:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:40.885 22:03:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:40.885 22:03:36 -- spdk/autotest.sh@122 -- # uname -s 00:03:40.885 22:03:36 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
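For orientation, the pre-cleanup pass traced above probes each /dev/nvme*n* namespace for a partition table and zero-fills the first 1 MiB of any namespace that has none, then syncs. The following is a minimal standalone sketch of that probe-and-wipe loop, built only from commands visible in the trace (blkid, dd, sync); the variable names and echo message are illustrative and are not taken from autotest.sh itself.

# Sketch: for every NVMe namespace without a partition table, wipe its first 1 MiB.
# Mirrors the probe-then-dd pattern shown in the trace above; illustrative only.
for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    # blkid prints the partition-table type, or nothing (non-zero exit) if none is found
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z "$pt" ]]; then
        echo "No partition table on $dev, zero-filling first 1 MiB"
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done
sync

Each such wipe is what produces the "1+0 records in / 1+0 records out ... 1048576 bytes (1.0 MB, 1.0 MiB) copied" lines seen above for nvme0n1, nvme1n1, nvme1n2 and nvme1n3.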
00:03:40.885 22:03:36 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:40.885 22:03:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:40.885 22:03:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:40.885 22:03:36 -- common/autotest_common.sh@10 -- # set +x 00:03:40.885 ************************************ 00:03:40.885 START TEST setup.sh 00:03:40.885 ************************************ 00:03:40.885 22:03:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:40.885 * Looking for test storage... 00:03:40.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:40.885 22:03:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:40.885 22:03:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:40.885 22:03:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:40.885 22:03:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:40.885 22:03:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:40.885 22:03:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:40.885 22:03:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:40.885 22:03:36 -- scripts/common.sh@335 -- # IFS=.-: 00:03:40.885 22:03:36 -- scripts/common.sh@335 -- # read -ra ver1 00:03:40.885 22:03:36 -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.885 22:03:36 -- scripts/common.sh@336 -- # read -ra ver2 00:03:40.885 22:03:36 -- scripts/common.sh@337 -- # local 'op=<' 00:03:40.885 22:03:36 -- scripts/common.sh@339 -- # ver1_l=2 00:03:40.885 22:03:36 -- scripts/common.sh@340 -- # ver2_l=1 00:03:40.885 22:03:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:40.885 22:03:36 -- scripts/common.sh@343 -- # case "$op" in 00:03:40.885 22:03:36 -- scripts/common.sh@344 -- # : 1 00:03:40.885 22:03:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:40.885 22:03:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.886 22:03:36 -- scripts/common.sh@364 -- # decimal 1 00:03:40.886 22:03:36 -- scripts/common.sh@352 -- # local d=1 00:03:40.886 22:03:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.886 22:03:36 -- scripts/common.sh@354 -- # echo 1 00:03:40.886 22:03:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:40.886 22:03:36 -- scripts/common.sh@365 -- # decimal 2 00:03:40.886 22:03:36 -- scripts/common.sh@352 -- # local d=2 00:03:40.886 22:03:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.886 22:03:36 -- scripts/common.sh@354 -- # echo 2 00:03:40.886 22:03:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:40.886 22:03:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:40.886 22:03:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:40.886 22:03:36 -- scripts/common.sh@367 -- # return 0 00:03:40.886 22:03:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.886 22:03:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:40.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.886 --rc genhtml_branch_coverage=1 00:03:40.886 --rc genhtml_function_coverage=1 00:03:40.886 --rc genhtml_legend=1 00:03:40.886 --rc geninfo_all_blocks=1 00:03:40.886 --rc geninfo_unexecuted_blocks=1 00:03:40.886 00:03:40.886 ' 00:03:40.886 22:03:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:40.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.886 --rc genhtml_branch_coverage=1 00:03:40.886 --rc genhtml_function_coverage=1 00:03:40.886 --rc genhtml_legend=1 00:03:40.886 --rc geninfo_all_blocks=1 00:03:40.886 --rc geninfo_unexecuted_blocks=1 00:03:40.886 00:03:40.886 ' 00:03:40.886 22:03:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:40.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.886 --rc genhtml_branch_coverage=1 00:03:40.886 --rc genhtml_function_coverage=1 00:03:40.886 --rc genhtml_legend=1 00:03:40.886 --rc geninfo_all_blocks=1 00:03:40.886 --rc geninfo_unexecuted_blocks=1 00:03:40.886 00:03:40.886 ' 00:03:40.886 22:03:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:40.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.886 --rc genhtml_branch_coverage=1 00:03:40.886 --rc genhtml_function_coverage=1 00:03:40.886 --rc genhtml_legend=1 00:03:40.886 --rc geninfo_all_blocks=1 00:03:40.886 --rc geninfo_unexecuted_blocks=1 00:03:40.886 00:03:40.886 ' 00:03:40.886 22:03:36 -- setup/test-setup.sh@10 -- # uname -s 00:03:40.886 22:03:36 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:40.886 22:03:36 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:40.886 22:03:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:40.886 22:03:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:40.886 22:03:36 -- common/autotest_common.sh@10 -- # set +x 00:03:40.886 ************************************ 00:03:40.886 START TEST acl 00:03:40.886 ************************************ 00:03:40.886 22:03:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:40.886 * Looking for test storage... 
00:03:40.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:40.886 22:03:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:40.886 22:03:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:40.886 22:03:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:40.886 22:03:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:40.886 22:03:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:40.886 22:03:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:40.886 22:03:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:40.886 22:03:37 -- scripts/common.sh@335 -- # IFS=.-: 00:03:40.886 22:03:37 -- scripts/common.sh@335 -- # read -ra ver1 00:03:40.886 22:03:37 -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.886 22:03:37 -- scripts/common.sh@336 -- # read -ra ver2 00:03:40.886 22:03:37 -- scripts/common.sh@337 -- # local 'op=<' 00:03:40.886 22:03:37 -- scripts/common.sh@339 -- # ver1_l=2 00:03:40.886 22:03:37 -- scripts/common.sh@340 -- # ver2_l=1 00:03:40.886 22:03:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:40.886 22:03:37 -- scripts/common.sh@343 -- # case "$op" in 00:03:40.886 22:03:37 -- scripts/common.sh@344 -- # : 1 00:03:40.886 22:03:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:40.886 22:03:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:40.886 22:03:37 -- scripts/common.sh@364 -- # decimal 1 00:03:40.886 22:03:37 -- scripts/common.sh@352 -- # local d=1 00:03:40.886 22:03:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.886 22:03:37 -- scripts/common.sh@354 -- # echo 1 00:03:40.886 22:03:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:40.886 22:03:37 -- scripts/common.sh@365 -- # decimal 2 00:03:40.886 22:03:37 -- scripts/common.sh@352 -- # local d=2 00:03:40.886 22:03:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.886 22:03:37 -- scripts/common.sh@354 -- # echo 2 00:03:40.886 22:03:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:40.886 22:03:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:40.886 22:03:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:40.886 22:03:37 -- scripts/common.sh@367 -- # return 0 00:03:40.886 22:03:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.886 22:03:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:40.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.886 --rc genhtml_branch_coverage=1 00:03:40.886 --rc genhtml_function_coverage=1 00:03:40.886 --rc genhtml_legend=1 00:03:40.886 --rc geninfo_all_blocks=1 00:03:40.886 --rc geninfo_unexecuted_blocks=1 00:03:40.886 00:03:40.886 ' 00:03:40.886 22:03:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:40.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.886 --rc genhtml_branch_coverage=1 00:03:40.886 --rc genhtml_function_coverage=1 00:03:40.886 --rc genhtml_legend=1 00:03:40.886 --rc geninfo_all_blocks=1 00:03:40.886 --rc geninfo_unexecuted_blocks=1 00:03:40.886 00:03:40.886 ' 00:03:40.886 22:03:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:40.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.886 --rc genhtml_branch_coverage=1 00:03:40.886 --rc genhtml_function_coverage=1 00:03:40.886 --rc genhtml_legend=1 00:03:40.886 --rc geninfo_all_blocks=1 00:03:40.886 --rc geninfo_unexecuted_blocks=1 00:03:40.886 00:03:40.886 ' 00:03:40.886 22:03:37 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:40.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.886 --rc genhtml_branch_coverage=1 00:03:40.886 --rc genhtml_function_coverage=1 00:03:40.886 --rc genhtml_legend=1 00:03:40.886 --rc geninfo_all_blocks=1 00:03:40.886 --rc geninfo_unexecuted_blocks=1 00:03:40.886 00:03:40.886 ' 00:03:40.886 22:03:37 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:40.886 22:03:37 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:40.886 22:03:37 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:40.886 22:03:37 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:40.886 22:03:37 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:40.886 22:03:37 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:40.886 22:03:37 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:40.886 22:03:37 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:40.886 22:03:37 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:40.886 22:03:37 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:40.886 22:03:37 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:40.886 22:03:37 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:40.886 22:03:37 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:40.886 22:03:37 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:40.886 22:03:37 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:40.886 22:03:37 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:40.886 22:03:37 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:40.886 22:03:37 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:40.886 22:03:37 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:40.886 22:03:37 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:40.886 22:03:37 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:40.886 22:03:37 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:40.886 22:03:37 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:40.886 22:03:37 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:40.886 22:03:37 -- setup/acl.sh@12 -- # devs=() 00:03:40.886 22:03:37 -- setup/acl.sh@12 -- # declare -a devs 00:03:40.886 22:03:37 -- setup/acl.sh@13 -- # drivers=() 00:03:40.886 22:03:37 -- setup/acl.sh@13 -- # declare -A drivers 00:03:40.886 22:03:37 -- setup/acl.sh@51 -- # setup reset 00:03:40.886 22:03:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.886 22:03:37 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:41.454 22:03:37 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:41.454 22:03:37 -- setup/acl.sh@16 -- # local dev driver 00:03:41.454 22:03:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.454 22:03:37 -- setup/acl.sh@15 -- # setup output status 00:03:41.454 22:03:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.454 22:03:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:41.713 Hugepages 00:03:41.713 node hugesize free / total 00:03:41.713 22:03:38 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:41.713 22:03:38 -- setup/acl.sh@19 -- # continue 00:03:41.713 22:03:38 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:03:41.713 00:03:41.713 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:41.713 22:03:38 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:41.713 22:03:38 -- setup/acl.sh@19 -- # continue 00:03:41.713 22:03:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.713 22:03:38 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:41.713 22:03:38 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:41.713 22:03:38 -- setup/acl.sh@20 -- # continue 00:03:41.713 22:03:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.713 22:03:38 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:41.713 22:03:38 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:41.713 22:03:38 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:41.713 22:03:38 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:41.713 22:03:38 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:41.713 22:03:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.971 22:03:38 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:41.971 22:03:38 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:41.971 22:03:38 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:41.971 22:03:38 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:41.971 22:03:38 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:41.971 22:03:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.971 22:03:38 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:41.971 22:03:38 -- setup/acl.sh@54 -- # run_test denied denied 00:03:41.971 22:03:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:41.971 22:03:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:41.971 22:03:38 -- common/autotest_common.sh@10 -- # set +x 00:03:41.971 ************************************ 00:03:41.971 START TEST denied 00:03:41.971 ************************************ 00:03:41.971 22:03:38 -- common/autotest_common.sh@1114 -- # denied 00:03:41.971 22:03:38 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:41.971 22:03:38 -- setup/acl.sh@38 -- # setup output config 00:03:41.971 22:03:38 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:41.971 22:03:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.971 22:03:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:42.906 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:42.906 22:03:39 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:42.906 22:03:39 -- setup/acl.sh@28 -- # local dev driver 00:03:42.906 22:03:39 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:42.906 22:03:39 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:42.906 22:03:39 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:42.906 22:03:39 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:42.906 22:03:39 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:42.906 22:03:39 -- setup/acl.sh@41 -- # setup reset 00:03:42.906 22:03:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.906 22:03:39 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:43.473 00:03:43.473 real 0m1.500s 00:03:43.473 user 0m0.599s 00:03:43.473 sys 0m0.830s 00:03:43.473 22:03:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:43.473 22:03:39 -- common/autotest_common.sh@10 -- # set +x 00:03:43.473 ************************************ 00:03:43.473 END TEST denied 00:03:43.473 
************************************ 00:03:43.473 22:03:39 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:43.473 22:03:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.473 22:03:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.473 22:03:39 -- common/autotest_common.sh@10 -- # set +x 00:03:43.473 ************************************ 00:03:43.473 START TEST allowed 00:03:43.473 ************************************ 00:03:43.473 22:03:39 -- common/autotest_common.sh@1114 -- # allowed 00:03:43.473 22:03:39 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:43.473 22:03:39 -- setup/acl.sh@45 -- # setup output config 00:03:43.473 22:03:39 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:43.473 22:03:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.473 22:03:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:44.408 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.408 22:03:40 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:03:44.408 22:03:40 -- setup/acl.sh@28 -- # local dev driver 00:03:44.408 22:03:40 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:44.408 22:03:40 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:44.408 22:03:40 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:03:44.408 22:03:40 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:44.408 22:03:40 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:44.408 22:03:40 -- setup/acl.sh@48 -- # setup reset 00:03:44.408 22:03:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.409 22:03:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:44.976 00:03:44.976 real 0m1.573s 00:03:44.976 user 0m0.688s 00:03:44.976 sys 0m0.892s 00:03:44.976 22:03:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:44.976 22:03:41 -- common/autotest_common.sh@10 -- # set +x 00:03:44.976 ************************************ 00:03:44.976 END TEST allowed 00:03:44.976 ************************************ 00:03:44.976 00:03:44.976 real 0m4.514s 00:03:44.976 user 0m1.950s 00:03:44.976 sys 0m2.523s 00:03:44.976 22:03:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:44.976 22:03:41 -- common/autotest_common.sh@10 -- # set +x 00:03:44.976 ************************************ 00:03:44.976 END TEST acl 00:03:44.976 ************************************ 00:03:44.976 22:03:41 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:44.976 22:03:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:44.976 22:03:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:44.976 22:03:41 -- common/autotest_common.sh@10 -- # set +x 00:03:44.976 ************************************ 00:03:44.976 START TEST hugepages 00:03:44.976 ************************************ 00:03:44.976 22:03:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:45.235 * Looking for test storage... 
00:03:45.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:45.235 22:03:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:45.235 22:03:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:45.235 22:03:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:45.235 22:03:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:45.235 22:03:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:45.235 22:03:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:45.235 22:03:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:45.235 22:03:41 -- scripts/common.sh@335 -- # IFS=.-: 00:03:45.235 22:03:41 -- scripts/common.sh@335 -- # read -ra ver1 00:03:45.235 22:03:41 -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.235 22:03:41 -- scripts/common.sh@336 -- # read -ra ver2 00:03:45.235 22:03:41 -- scripts/common.sh@337 -- # local 'op=<' 00:03:45.235 22:03:41 -- scripts/common.sh@339 -- # ver1_l=2 00:03:45.235 22:03:41 -- scripts/common.sh@340 -- # ver2_l=1 00:03:45.235 22:03:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:45.235 22:03:41 -- scripts/common.sh@343 -- # case "$op" in 00:03:45.235 22:03:41 -- scripts/common.sh@344 -- # : 1 00:03:45.235 22:03:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:45.235 22:03:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:45.235 22:03:41 -- scripts/common.sh@364 -- # decimal 1 00:03:45.235 22:03:41 -- scripts/common.sh@352 -- # local d=1 00:03:45.235 22:03:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.235 22:03:41 -- scripts/common.sh@354 -- # echo 1 00:03:45.235 22:03:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:45.235 22:03:41 -- scripts/common.sh@365 -- # decimal 2 00:03:45.235 22:03:41 -- scripts/common.sh@352 -- # local d=2 00:03:45.235 22:03:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.235 22:03:41 -- scripts/common.sh@354 -- # echo 2 00:03:45.235 22:03:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:45.235 22:03:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:45.235 22:03:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:45.235 22:03:41 -- scripts/common.sh@367 -- # return 0 00:03:45.235 22:03:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.235 22:03:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:45.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.235 --rc genhtml_branch_coverage=1 00:03:45.235 --rc genhtml_function_coverage=1 00:03:45.235 --rc genhtml_legend=1 00:03:45.235 --rc geninfo_all_blocks=1 00:03:45.235 --rc geninfo_unexecuted_blocks=1 00:03:45.235 00:03:45.235 ' 00:03:45.235 22:03:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:45.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.235 --rc genhtml_branch_coverage=1 00:03:45.235 --rc genhtml_function_coverage=1 00:03:45.235 --rc genhtml_legend=1 00:03:45.235 --rc geninfo_all_blocks=1 00:03:45.235 --rc geninfo_unexecuted_blocks=1 00:03:45.235 00:03:45.235 ' 00:03:45.235 22:03:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:45.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.236 --rc genhtml_branch_coverage=1 00:03:45.236 --rc genhtml_function_coverage=1 00:03:45.236 --rc genhtml_legend=1 00:03:45.236 --rc geninfo_all_blocks=1 00:03:45.236 --rc geninfo_unexecuted_blocks=1 00:03:45.236 00:03:45.236 ' 00:03:45.236 22:03:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:45.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.236 --rc genhtml_branch_coverage=1 00:03:45.236 --rc genhtml_function_coverage=1 00:03:45.236 --rc genhtml_legend=1 00:03:45.236 --rc geninfo_all_blocks=1 00:03:45.236 --rc geninfo_unexecuted_blocks=1 00:03:45.236 00:03:45.236 ' 00:03:45.236 22:03:41 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:45.236 22:03:41 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:45.236 22:03:41 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:45.236 22:03:41 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:45.236 22:03:41 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:45.236 22:03:41 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:45.236 22:03:41 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:45.236 22:03:41 -- setup/common.sh@18 -- # local node= 00:03:45.236 22:03:41 -- setup/common.sh@19 -- # local var val 00:03:45.236 22:03:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.236 22:03:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.236 22:03:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.236 22:03:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.236 22:03:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.236 22:03:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 5858340 kB' 'MemAvailable: 7369816 kB' 'Buffers: 2684 kB' 'Cached: 1722272 kB' 'SwapCached: 0 kB' 'Active: 496096 kB' 'Inactive: 1345260 kB' 'Active(anon): 126908 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345260 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 118288 kB' 'Mapped: 50788 kB' 'Shmem: 10508 kB' 'KReclaimable: 67948 kB' 'Slab: 163324 kB' 'SReclaimable: 67948 kB' 'SUnreclaim: 95376 kB' 'KernelStack: 6464 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411012 kB' 'Committed_AS: 317372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- 
setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.236 22:03:41 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.236 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.236 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # continue 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.237 22:03:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.237 22:03:41 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.237 22:03:41 -- setup/common.sh@33 -- # echo 2048 00:03:45.237 22:03:41 -- setup/common.sh@33 -- # return 0 00:03:45.237 22:03:41 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:45.237 22:03:41 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:45.237 22:03:41 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:45.237 22:03:41 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:45.237 22:03:41 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:45.237 22:03:41 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:45.237 22:03:41 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:45.237 22:03:41 -- setup/hugepages.sh@207 -- # get_nodes 00:03:45.237 22:03:41 -- setup/hugepages.sh@27 -- # local node 00:03:45.237 22:03:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.237 22:03:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:45.237 22:03:41 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.237 22:03:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.237 22:03:41 -- setup/hugepages.sh@208 -- # clear_hp 00:03:45.237 22:03:41 -- setup/hugepages.sh@37 -- # local node hp 00:03:45.237 22:03:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.237 22:03:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.237 22:03:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.237 22:03:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.237 22:03:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.237 22:03:41 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:45.237 22:03:41 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:45.237 22:03:41 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:45.237 22:03:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:45.237 22:03:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:45.237 22:03:41 -- common/autotest_common.sh@10 -- # set +x 00:03:45.237 ************************************ 00:03:45.237 START TEST default_setup 00:03:45.237 ************************************ 00:03:45.237 22:03:41 -- common/autotest_common.sh@1114 -- # default_setup 00:03:45.237 22:03:41 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:45.237 22:03:41 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.237 22:03:41 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:45.237 22:03:41 -- setup/hugepages.sh@51 -- # shift 00:03:45.237 22:03:41 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:45.237 22:03:41 -- setup/hugepages.sh@52 -- # local node_ids 00:03:45.237 22:03:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.237 22:03:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.237 22:03:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:45.237 22:03:41 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:45.237 22:03:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.237 22:03:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.237 22:03:41 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.237 22:03:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.237 22:03:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.237 22:03:41 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:45.237 22:03:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.237 22:03:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:45.237 22:03:41 -- setup/hugepages.sh@73 -- # return 0 00:03:45.237 22:03:41 -- setup/hugepages.sh@137 -- # setup output 00:03:45.237 22:03:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.237 22:03:41 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.178 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.178 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:46.178 22:03:42 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:46.178 22:03:42 -- setup/hugepages.sh@89 -- # local node 00:03:46.178 22:03:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.178 22:03:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.178 22:03:42 -- setup/hugepages.sh@92 -- # local surp 00:03:46.178 22:03:42 -- setup/hugepages.sh@93 -- # local resv 00:03:46.178 22:03:42 -- setup/hugepages.sh@94 -- # local anon 00:03:46.178 22:03:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.178 22:03:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.178 22:03:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.178 22:03:42 -- setup/common.sh@18 -- # local node= 00:03:46.178 22:03:42 -- setup/common.sh@19 -- # local var val 00:03:46.178 22:03:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.178 22:03:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.178 22:03:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.178 22:03:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.178 22:03:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.178 22:03:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.178 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.178 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.178 22:03:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7889304 kB' 'MemAvailable: 9400616 kB' 'Buffers: 2684 kB' 'Cached: 1722260 kB' 'SwapCached: 0 kB' 'Active: 497936 kB' 'Inactive: 1345268 kB' 'Active(anon): 128748 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345268 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119868 kB' 'Mapped: 50884 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163004 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95396 kB' 'KernelStack: 6480 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:46.178 22:03:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.178 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.178 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.178 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.178 22:03:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.178 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.178 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.178 22:03:42 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:46.178 22:03:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.178 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.178 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.178 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.178 22:03:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.178 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.178 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.178 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.178 22:03:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- 
setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.179 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.179 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.180 22:03:42 -- setup/common.sh@33 -- # echo 0 00:03:46.180 22:03:42 -- setup/common.sh@33 -- # return 0 00:03:46.180 22:03:42 -- setup/hugepages.sh@97 -- # anon=0 00:03:46.180 22:03:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.180 22:03:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.180 22:03:42 -- setup/common.sh@18 -- # local node= 00:03:46.180 22:03:42 -- setup/common.sh@19 -- # local var val 00:03:46.180 22:03:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.180 22:03:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.180 22:03:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.180 22:03:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.180 22:03:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.180 22:03:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7888060 kB' 'MemAvailable: 9399376 kB' 'Buffers: 2684 kB' 'Cached: 1722260 kB' 'SwapCached: 0 kB' 'Active: 497972 kB' 'Inactive: 1345272 kB' 'Active(anon): 128784 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119884 kB' 'Mapped: 50884 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163004 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95396 kB' 'KernelStack: 6448 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 
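[Annotation] The long runs of "[[ <field> == \H\u\g\e... ]] / continue" entries throughout this trace are setup/common.sh's get_meminfo helper scanning a /proc/meminfo dump one field at a time under xtrace until it reaches the requested key. A minimal sketch of what that traced loop does, reconstructed from the output here rather than copied from the SPDK source (the real helper first copies /proc/meminfo, or a per-node meminfo, into an array via mapfile and scans that copy):

get_meminfo() {
    local get=$1 var val _
    # split each meminfo line into "field" and "value" on ': '
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip fields until the requested key matches
        echo "$val"                        # e.g. 2048 for Hugepagesize, 0 for HugePages_Surp
        return 0
    done < /proc/meminfo
}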
00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.180 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.180 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- 
setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 
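[Annotation] The HugePages_Total: 1024 / Hugepagesize: 2048 kB figures in the meminfo dumps here are consistent with the earlier get_test_nr_hugepages 2097152 0 call in this trace. A sketch of that sizing arithmetic, with variable names assumed from the trace, not quoted from hugepages.sh:

size_kb=2097152                                    # requested hugepage memory for default_setup, in kB
default_hugepages=2048                             # Hugepagesize read from /proc/meminfo, in kB
nr_hugepages=$(( size_kb / default_hugepages ))    # 2097152 / 2048 = 1024
echo "nr_hugepages=$nr_hugepages"                  # matches the nr_hugepages=1024 echoed later in the trace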
00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.181 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.181 22:03:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.182 22:03:42 -- setup/common.sh@33 -- # echo 0 00:03:46.182 22:03:42 -- setup/common.sh@33 -- # return 0 00:03:46.182 22:03:42 -- setup/hugepages.sh@99 -- # surp=0 00:03:46.182 22:03:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.182 22:03:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.182 22:03:42 -- setup/common.sh@18 -- # local node= 00:03:46.182 22:03:42 -- setup/common.sh@19 -- # local var val 00:03:46.182 22:03:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.182 22:03:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.182 22:03:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.182 22:03:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.182 22:03:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.182 22:03:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.182 
22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7887808 kB' 'MemAvailable: 9399124 kB' 'Buffers: 2684 kB' 'Cached: 1722260 kB' 'SwapCached: 0 kB' 'Active: 497484 kB' 'Inactive: 1345272 kB' 'Active(anon): 128296 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119432 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 162996 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95388 kB' 'KernelStack: 6464 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 
22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.182 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.182 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 
22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.183 22:03:42 -- setup/common.sh@33 -- # echo 0 00:03:46.183 22:03:42 -- setup/common.sh@33 -- # return 0 00:03:46.183 22:03:42 -- setup/hugepages.sh@100 -- # resv=0 00:03:46.183 nr_hugepages=1024 00:03:46.183 22:03:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.183 resv_hugepages=0 00:03:46.183 22:03:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.183 surplus_hugepages=0 00:03:46.183 22:03:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.183 anon_hugepages=0 00:03:46.183 22:03:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.183 22:03:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.183 22:03:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.183 22:03:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.183 22:03:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.183 22:03:42 -- setup/common.sh@18 -- # local node= 00:03:46.183 22:03:42 -- setup/common.sh@19 -- # local var val 00:03:46.183 22:03:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.183 22:03:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.183 22:03:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.183 22:03:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.183 22:03:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.183 22:03:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.183 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.183 22:03:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7887808 kB' 'MemAvailable: 9399124 kB' 'Buffers: 2684 kB' 'Cached: 1722260 kB' 'SwapCached: 0 kB' 'Active: 497484 kB' 'Inactive: 1345272 kB' 'Active(anon): 128296 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119432 kB' 'Mapped: 50788 kB' 
'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 162996 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95388 kB' 'KernelStack: 6464 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 
-- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- 
setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.184 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.184 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.185 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.185 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.445 22:03:42 -- 
setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.445 22:03:42 -- setup/common.sh@33 -- # echo 1024 00:03:46.445 22:03:42 -- setup/common.sh@33 -- # return 0 00:03:46.445 22:03:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.445 22:03:42 -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.445 22:03:42 -- setup/hugepages.sh@27 -- # local node 00:03:46.445 22:03:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.445 22:03:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.445 22:03:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.445 22:03:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.445 22:03:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.445 22:03:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.445 22:03:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.445 22:03:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.445 22:03:42 -- setup/common.sh@18 -- # local node=0 00:03:46.445 22:03:42 -- setup/common.sh@19 -- # local var val 00:03:46.445 22:03:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.445 22:03:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.445 22:03:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.445 22:03:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.445 22:03:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.445 22:03:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7888476 kB' 'MemUsed: 4350644 kB' 'SwapCached: 0 kB' 'Active: 497644 kB' 'Inactive: 1345272 kB' 'Active(anon): 128456 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 1724944 kB' 'Mapped: 50788 kB' 'AnonPages: 119592 kB' 'Shmem: 10484 kB' 'KernelStack: 6432 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67608 kB' 'Slab: 162996 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 
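The key-by-key scan above is setup/common.sh's get_meminfo helper at work: the requested meminfo file is read into an array with mapfile, any "Node N " prefix is stripped, and each "Key: value" line is split on IFS=': ' until the key matches the field being asked for, whose value is then echoed (1024 for HugePages_Total just above, and the node 0 HugePages_Surp lookup that follows works the same way). A minimal standalone sketch of that lookup, written for illustration only; it streams the file instead of using mapfile, and the body is not copied from the script:

  # Illustrative re-implementation of the get_meminfo pattern seen in the trace;
  # the field names and node-file layout are real, the function body is a sketch.
  get_meminfo() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          # per-node meminfo lines carry a "Node <id> " prefix, /proc/meminfo lines do not
          [[ -n $node ]] && line=${line#Node "$node" }
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"            # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
              return 0
          fi
      done < "$mem_f"
      return 1
  }
  get_meminfo HugePages_Total        # system-wide count checked above
  get_meminfo HugePages_Surp 0       # node 0 surplus pages, checked next in the trace
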
22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.445 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.445 22:03:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # continue 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.446 22:03:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.446 22:03:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.446 22:03:42 -- setup/common.sh@33 -- # echo 0 00:03:46.446 22:03:42 -- setup/common.sh@33 -- # return 0 00:03:46.446 22:03:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.446 22:03:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.446 22:03:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.446 22:03:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.446 node0=1024 expecting 1024 00:03:46.446 22:03:42 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.446 22:03:42 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.446 00:03:46.446 real 0m1.000s 00:03:46.446 user 0m0.485s 00:03:46.446 sys 0m0.471s 00:03:46.446 22:03:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:46.446 22:03:42 -- common/autotest_common.sh@10 -- # set +x 00:03:46.446 ************************************ 00:03:46.446 END TEST default_setup 00:03:46.446 ************************************ 00:03:46.446 22:03:42 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:46.446 22:03:42 
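The per_node_1G_alloc test that starts here asks for 1048576 kB (1 GiB) of hugepages confined to NUMA node 0; with the default 2048 kB hugepage size reported in the meminfo dumps, that works out to 512 pages, which is why the trace below sets nr_hugepages=512 and runs setup.sh with NRHUGE=512 HUGENODE=0. Roughly, the size-to-page-count step amounts to the following sketch (variable names follow the trace; the snippet itself is illustrative, not the actual hugepages.sh code):

  # Sketch of get_test_nr_hugepages' arithmetic for the "1048576 0" call traced below.
  size=1048576                      # requested kB (1 GiB)
  default_hugepages=2048            # kB, Hugepagesize from /proc/meminfo
  node_ids=('0')                    # only allocate on NUMA node 0
  (( size >= default_hugepages )) || exit 1
  nr_hugepages=$(( size / default_hugepages ))       # 1048576 / 2048 = 512
  NRHUGE=$nr_hugepages HUGENODE=${node_ids[0]} /home/vagrant/spdk_repo/spdk/scripts/setup.sh
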
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:46.446 22:03:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:46.446 22:03:42 -- common/autotest_common.sh@10 -- # set +x 00:03:46.446 ************************************ 00:03:46.446 START TEST per_node_1G_alloc 00:03:46.446 ************************************ 00:03:46.446 22:03:42 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:03:46.446 22:03:42 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:46.446 22:03:42 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:46.446 22:03:42 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:46.446 22:03:42 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:46.446 22:03:42 -- setup/hugepages.sh@51 -- # shift 00:03:46.446 22:03:42 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:46.446 22:03:42 -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.446 22:03:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.446 22:03:42 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:46.446 22:03:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:46.446 22:03:42 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:46.446 22:03:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.446 22:03:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.446 22:03:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.446 22:03:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.446 22:03:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.446 22:03:42 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:46.446 22:03:42 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.446 22:03:42 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.446 22:03:42 -- setup/hugepages.sh@73 -- # return 0 00:03:46.446 22:03:42 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:46.446 22:03:42 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:46.446 22:03:42 -- setup/hugepages.sh@146 -- # setup output 00:03:46.446 22:03:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.446 22:03:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.707 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.707 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.707 22:03:43 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:46.707 22:03:43 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:46.707 22:03:43 -- setup/hugepages.sh@89 -- # local node 00:03:46.707 22:03:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.707 22:03:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.707 22:03:43 -- setup/hugepages.sh@92 -- # local surp 00:03:46.707 22:03:43 -- setup/hugepages.sh@93 -- # local resv 00:03:46.707 22:03:43 -- setup/hugepages.sh@94 -- # local anon 00:03:46.707 22:03:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.707 22:03:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.707 22:03:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.707 22:03:43 -- setup/common.sh@18 -- # local node= 00:03:46.707 22:03:43 -- setup/common.sh@19 -- # local var val 00:03:46.707 22:03:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.707 22:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.707 22:03:43 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.707 22:03:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.707 22:03:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.707 22:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.707 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 22:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8934116 kB' 'MemAvailable: 10445432 kB' 'Buffers: 2684 kB' 'Cached: 1722260 kB' 'SwapCached: 0 kB' 'Active: 498112 kB' 'Inactive: 1345272 kB' 'Active(anon): 128924 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 120092 kB' 'Mapped: 50876 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163000 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95392 kB' 'KernelStack: 6456 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:46.707 22:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.707 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.707 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 22:03:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.707 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.707 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 22:03:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.707 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.707 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.707 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.707 22:03:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 
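What follows is verify_nr_hugepages for the 512-page configuration: it first checks that transparent hugepages are not set to [never] and reads AnonHugePages (anon=0 here), then collects HugePages_Surp and HugePages_Rsvd the same way, so the total can be checked against the requested count plus surplus and reserved pages, just as the '(( 1024 == nr_hugepages + surp + resv ))' check did for the previous test. Condensed, the bookkeeping traced over the next few screens amounts to this sketch (helper and variable names are taken from the trace; the composition into one snippet is illustrative):

  # Sketch of the verify_nr_hugepages bookkeeping visible in the trace below.
  anon=$(get_meminfo AnonHugePages)       # 0: no transparent hugepages counted against the test
  surp=$(get_meminfo HugePages_Surp)      # 0: nothing allocated beyond nr_hugepages
  resv=$(get_meminfo HugePages_Rsvd)      # pages reserved but not yet faulted in
  total=$(get_meminfo HugePages_Total)    # 512 for this per-node test
  (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count" >&2
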
-- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 
22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.708 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.708 22:03:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.708 22:03:43 -- setup/common.sh@33 -- # echo 0 00:03:46.708 22:03:43 -- setup/common.sh@33 -- # return 0 00:03:46.708 22:03:43 -- setup/hugepages.sh@97 -- # anon=0 00:03:46.708 22:03:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.708 22:03:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.708 22:03:43 -- setup/common.sh@18 -- # local node= 00:03:46.708 22:03:43 -- setup/common.sh@19 -- # local var val 00:03:46.708 22:03:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.708 22:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.708 22:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.708 22:03:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.708 22:03:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.708 22:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.709 22:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8934116 kB' 'MemAvailable: 10445432 kB' 'Buffers: 2684 kB' 'Cached: 1722260 kB' 'SwapCached: 0 kB' 'Active: 497920 kB' 'Inactive: 1345272 kB' 'Active(anon): 128732 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119836 kB' 'Mapped: 50876 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 162996 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95388 kB' 'KernelStack: 6440 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # 
continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.709 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.709 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.710 22:03:43 -- setup/common.sh@33 -- # echo 0 00:03:46.710 22:03:43 -- setup/common.sh@33 -- # return 0 00:03:46.710 22:03:43 -- setup/hugepages.sh@99 -- # surp=0 00:03:46.710 22:03:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.710 22:03:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.710 22:03:43 -- setup/common.sh@18 -- # local node= 00:03:46.710 22:03:43 -- setup/common.sh@19 -- # local var val 00:03:46.710 22:03:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.710 22:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.710 22:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.710 22:03:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.710 22:03:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.710 22:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8934396 kB' 'MemAvailable: 10445712 kB' 'Buffers: 2684 kB' 'Cached: 1722260 kB' 'SwapCached: 0 kB' 'Active: 497480 kB' 'Inactive: 1345272 kB' 'Active(anon): 128292 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119368 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163028 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95420 kB' 'KernelStack: 6448 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.710 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.710 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.972 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.972 22:03:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.973 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.973 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.974 22:03:43 -- setup/common.sh@33 -- # echo 0 00:03:46.974 22:03:43 -- setup/common.sh@33 -- # return 0 00:03:46.974 22:03:43 -- setup/hugepages.sh@100 -- # resv=0 00:03:46.974 nr_hugepages=512 00:03:46.974 22:03:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:46.974 resv_hugepages=0 00:03:46.974 22:03:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.974 surplus_hugepages=0 00:03:46.974 22:03:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.974 anon_hugepages=0 00:03:46.974 22:03:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.974 22:03:43 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:46.974 22:03:43 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:46.974 22:03:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.974 22:03:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.974 22:03:43 -- setup/common.sh@18 -- # local node= 00:03:46.974 22:03:43 -- setup/common.sh@19 -- # local var val 00:03:46.974 22:03:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.974 22:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.974 22:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.974 22:03:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.974 22:03:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.974 22:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8934856 kB' 'MemAvailable: 10446172 kB' 'Buffers: 2684 kB' 'Cached: 1722260 kB' 'SwapCached: 0 kB' 'Active: 497672 kB' 'Inactive: 1345272 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119560 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163024 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95416 kB' 'KernelStack: 6432 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 
22:03:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 
22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.974 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.974 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.975 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.975 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.975 22:03:43 -- setup/common.sh@33 -- # echo 512 00:03:46.975 22:03:43 -- setup/common.sh@33 -- # return 0 00:03:46.975 22:03:43 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:46.975 22:03:43 -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.975 22:03:43 -- setup/hugepages.sh@27 -- # local node 00:03:46.975 22:03:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.975 22:03:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.975 22:03:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.975 22:03:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.975 22:03:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.975 22:03:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.975 22:03:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.975 22:03:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.975 22:03:43 -- setup/common.sh@18 -- # local node=0 00:03:46.975 22:03:43 -- setup/common.sh@19 -- # local 
var val 00:03:46.975 22:03:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.976 22:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.976 22:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.976 22:03:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.976 22:03:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.976 22:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8934856 kB' 'MemUsed: 3304264 kB' 'SwapCached: 0 kB' 'Active: 497428 kB' 'Inactive: 1345272 kB' 'Active(anon): 128240 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 1724944 kB' 'Mapped: 51048 kB' 'AnonPages: 119336 kB' 'Shmem: 10484 kB' 'KernelStack: 6480 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67608 kB' 'Slab: 163012 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- 
setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.976 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.976 22:03:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.977 22:03:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # continue 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.977 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.977 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.977 22:03:43 -- setup/common.sh@33 -- # echo 0 00:03:46.977 22:03:43 -- setup/common.sh@33 -- # return 0 00:03:46.977 22:03:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.977 22:03:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.977 22:03:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.977 22:03:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.977 node0=512 expecting 512 00:03:46.977 22:03:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:46.977 22:03:43 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:46.977 00:03:46.977 real 0m0.526s 00:03:46.977 user 0m0.257s 00:03:46.977 sys 0m0.299s 00:03:46.977 22:03:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:46.977 22:03:43 -- common/autotest_common.sh@10 -- # set +x 00:03:46.977 ************************************ 00:03:46.977 END TEST per_node_1G_alloc 00:03:46.977 ************************************ 00:03:46.977 22:03:43 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:46.977 22:03:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:46.977 22:03:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:46.977 22:03:43 -- common/autotest_common.sh@10 -- # set +x 00:03:46.977 ************************************ 00:03:46.977 START TEST even_2G_alloc 00:03:46.977 ************************************ 00:03:46.977 22:03:43 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:03:46.977 22:03:43 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:46.977 22:03:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:46.977 22:03:43 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:46.977 22:03:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.977 22:03:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:46.977 22:03:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:46.977 22:03:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:46.977 22:03:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.977 22:03:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:46.977 22:03:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.977 22:03:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.977 22:03:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.977 22:03:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:46.977 22:03:43 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:46.977 22:03:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.977 22:03:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:46.977 22:03:43 -- setup/hugepages.sh@83 -- # : 0 00:03:46.977 22:03:43 -- setup/hugepages.sh@84 -- # : 0 00:03:46.977 22:03:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.977 22:03:43 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:46.977 22:03:43 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:46.977 22:03:43 -- setup/hugepages.sh@153 -- # setup output 00:03:46.977 22:03:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.977 22:03:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.236 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.236 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.236 22:03:43 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:47.236 22:03:43 -- setup/hugepages.sh@89 -- # local node 00:03:47.236 22:03:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.236 22:03:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.236 22:03:43 -- setup/hugepages.sh@92 -- # local surp 00:03:47.236 22:03:43 -- setup/hugepages.sh@93 -- # local resv 00:03:47.236 22:03:43 -- setup/hugepages.sh@94 -- # local anon 00:03:47.236 22:03:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.236 22:03:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.498 22:03:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.498 22:03:43 -- setup/common.sh@18 -- # local node= 00:03:47.498 22:03:43 -- setup/common.sh@19 -- # local var val 00:03:47.499 22:03:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.499 22:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.499 22:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.499 22:03:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.499 22:03:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.499 22:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7903136 kB' 'MemAvailable: 9414452 kB' 'Buffers: 2684 kB' 'Cached: 1722260 kB' 'SwapCached: 0 kB' 'Active: 498048 kB' 'Inactive: 1345272 kB' 'Active(anon): 128860 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119956 kB' 'Mapped: 50888 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163012 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95404 kB' 'KernelStack: 6456 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 
22:03:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.499 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.499 22:03:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # 
continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.500 22:03:43 -- setup/common.sh@33 -- # echo 0 00:03:47.500 22:03:43 -- setup/common.sh@33 -- # return 0 00:03:47.500 22:03:43 -- setup/hugepages.sh@97 -- # anon=0 00:03:47.500 22:03:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.500 22:03:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.500 22:03:43 -- setup/common.sh@18 -- # local node= 00:03:47.500 22:03:43 -- setup/common.sh@19 -- # local var val 00:03:47.500 22:03:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.500 22:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.500 22:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.500 22:03:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.500 22:03:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.500 22:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7903136 kB' 'MemAvailable: 9414452 kB' 'Buffers: 2684 kB' 'Cached: 1722260 kB' 'SwapCached: 0 kB' 'Active: 498040 kB' 'Inactive: 1345272 kB' 'Active(anon): 128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119948 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163048 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95440 kB' 'KernelStack: 6480 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 
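The xtrace records above and below all repeat one field-lookup pattern: setup/common.sh reads /proc/meminfo, or a node's /sys/devices/system/node/nodeN/meminfo when a node id is passed, strips the leading "Node <n>" prefix, then walks the lines with IFS=': ' read -r var val _ until the requested field (AnonHugePages just now, HugePages_Surp next) matches, and echoes that field's value back to hugepages.sh. What follows is a minimal standalone Bash sketch of that pattern, reconstructed from the trace rather than copied from the SPDK setup scripts; the helper name and the node-0 usage at the bottom are illustrative assumptions.

#!/usr/bin/env bash
# Minimal sketch of the meminfo field lookup seen in the xtrace above.
# Reconstructed from the trace for illustration; not the SPDK scripts themselves.
shopt -s extglob    # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    local -a mem

    # Use the per-node view when a node id is given and the sysfs file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Slurp the file and strip the "Node <n> " prefix carried by per-node files,
    # so both formats parse identically.
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan "Field: value [kB]" lines; print the value of the requested field.
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Illustrative use, mirroring the per-node check done for node 0 above.
free=$(get_meminfo HugePages_Free 0)
surp=$(get_meminfo HugePages_Surp 0)
echo "node0: HugePages_Free=$free HugePages_Surp=$surp"

Echoing the value lets hugepages.sh capture it with command substitution for both the system-wide and the per-node view, which is how the bookkeeping above settles on anon=0 before the trace goes on to read the surplus count for the 1024-page even_2G_alloc check.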
00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.500 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.500 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # 
continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.501 22:03:43 -- setup/common.sh@33 -- # echo 0 00:03:47.501 22:03:43 -- setup/common.sh@33 -- # return 0 00:03:47.501 22:03:43 -- setup/hugepages.sh@99 -- # surp=0 00:03:47.501 22:03:43 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.501 22:03:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.501 22:03:43 -- setup/common.sh@18 -- # local node= 00:03:47.501 22:03:43 -- setup/common.sh@19 -- # local var val 00:03:47.501 22:03:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.501 22:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.501 22:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.501 22:03:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.501 22:03:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.501 22:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7903136 kB' 'MemAvailable: 9414452 kB' 'Buffers: 2684 kB' 'Cached: 1722260 kB' 'SwapCached: 0 kB' 'Active: 497796 kB' 'Inactive: 1345272 kB' 'Active(anon): 128608 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119736 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163048 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95440 kB' 'KernelStack: 6464 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.501 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.501 22:03:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 
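The heavily backslashed right-hand sides in these tests (\H\u\g\e\P\a\g\e\s\_\R\s\v\d and so on) are not log corruption: they are how bash's xtrace prints the match word of a [[ ... == ... ]] comparison when it is to be taken literally rather than as a glob, escaping each character. A comparison against a quoted variable is typically rendered that way, so a fragment like the following (illustrative, not the exact line in common.sh) would trace in the same per-character form:

    set -x
    get=HugePages_Rsvd
    var=SwapCached
    [[ $var == "$get" ]] || echo "no match, scan moves on"

Only the presentation differs; the comparison itself is a plain literal string match.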
00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- 
setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.502 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.502 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.503 22:03:43 -- setup/common.sh@33 -- # echo 0 00:03:47.503 22:03:43 -- setup/common.sh@33 -- # return 0 00:03:47.503 22:03:43 -- setup/hugepages.sh@100 -- # resv=0 00:03:47.503 nr_hugepages=1024 00:03:47.503 22:03:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.503 resv_hugepages=0 00:03:47.503 22:03:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.503 surplus_hugepages=0 00:03:47.503 22:03:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.503 anon_hugepages=0 00:03:47.503 22:03:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.503 22:03:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.503 22:03:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.503 22:03:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.503 22:03:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.503 22:03:43 -- setup/common.sh@18 -- # local node= 00:03:47.503 22:03:43 -- setup/common.sh@19 -- # local var val 00:03:47.503 22:03:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.503 22:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.503 22:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.503 22:03:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.503 22:03:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.503 22:03:43 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7903396 kB' 'MemAvailable: 9414712 kB' 'Buffers: 2684 kB' 'Cached: 1722260 kB' 'SwapCached: 0 kB' 'Active: 498064 kB' 'Inactive: 1345272 kB' 'Active(anon): 128876 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119968 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163052 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95444 kB' 'KernelStack: 6464 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 320804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 
22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.503 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.503 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 
00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 
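The three lookups feed one sanity check: with anon, surp and resv all read back as 0, hugepages.sh@107 asserts that the expected pool of 1024 pages equals nr_hugepages + surp + resv, and @110 then re-reads HugePages_Total to confirm the kernel reports the same figure. Expressed on its own, the consistency read looks roughly like this (variable and helper names are illustrative; the keys are the real /proc/meminfo fields):

    # Re-read the hugepage counters and check they add up to what was requested.
    read_key() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }

    nr_hugepages=1024                        # configured by the test
    surp=$(read_key HugePages_Surp)
    resv=$(read_key HugePages_Rsvd)
    total=$(read_key HugePages_Total)

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: $total pages"
    else
        echo "unexpected pool: total=$total surp=$surp resv=$resv" >&2
    fi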
00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.504 22:03:43 -- setup/common.sh@33 -- # echo 1024 00:03:47.504 22:03:43 -- setup/common.sh@33 -- # return 0 00:03:47.504 22:03:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.504 22:03:43 -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.504 22:03:43 -- setup/hugepages.sh@27 -- # local node 00:03:47.504 22:03:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.504 22:03:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.504 22:03:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.504 22:03:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.504 22:03:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.504 22:03:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.504 22:03:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.504 22:03:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.504 22:03:43 -- setup/common.sh@18 -- # local node=0 00:03:47.504 22:03:43 -- setup/common.sh@19 -- # local var val 00:03:47.504 22:03:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.504 22:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.504 22:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.504 22:03:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.504 22:03:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.504 22:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7903396 kB' 'MemUsed: 4335724 kB' 'SwapCached: 0 kB' 'Active: 497544 kB' 'Inactive: 1345272 kB' 'Active(anon): 128356 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345272 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 1724944 kB' 'Mapped: 50788 kB' 'AnonPages: 119528 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67608 kB' 'Slab: 163044 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.504 22:03:43 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.504 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.504 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:43 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:43 -- setup/common.sh@31 -- # IFS=': ' 
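From hugepages.sh@117 the same scan is repeated per NUMA node: get_meminfo HugePages_Surp 0 sets node=0, switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and the ${mem[@]#Node +([0-9]) } expansion strips the leading "Node 0 " prefix so the field loop stays identical. A trimmed sketch of just that per-node selection, assuming the node directory exists and with extglob enabled, which the traced +([0-9]) patterns rely on:

    # Choose the per-node meminfo file when a node is given, else the global one.
    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    printf '%s\n' "${mem[@]}" | grep '^HugePages_Surp'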
00:03:47.505 22:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- 
setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # continue 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.505 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.505 22:03:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.505 22:03:44 -- setup/common.sh@33 -- # echo 0 00:03:47.505 22:03:44 -- setup/common.sh@33 -- # return 0 00:03:47.505 node0=1024 expecting 1024 00:03:47.505 ************************************ 00:03:47.505 END TEST even_2G_alloc 00:03:47.505 ************************************ 00:03:47.505 22:03:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.505 22:03:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 
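even_2G_alloc finishes here with its expectation met: node0 holds the whole 1024-page pool with no surplus, so the node0=1024 expecting 1024 comparison just below succeeds and the test wraps up in well under a second. The per-node counts that get_nodes gathers are also visible directly in sysfs, so a quick cross-check of global versus per-node totals can be done outside the harness; a small illustrative loop over the standard kernel sysfs paths, not the SPDK code itself:

    # Compare the global hugepage total with the sum of the per-node counters.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    sum=0
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
        [[ -e $f ]] || continue            # skip if the node or page size is absent
        sum=$((sum + $(<"$f")))
    done
    echo "global=$total per-node sum=$sum"   # both 1024 on this single-node runner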
00:03:47.505 22:03:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:47.505 22:03:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:47.505 22:03:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:47.505 22:03:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:47.505
00:03:47.505 real 0m0.570s
00:03:47.505 user 0m0.289s
00:03:47.505 sys 0m0.314s
00:03:47.505 22:03:44 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:47.505 22:03:44 -- common/autotest_common.sh@10 -- # set +x
00:03:47.505 22:03:44 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:47.505 22:03:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:47.505 22:03:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:47.505 22:03:44 -- common/autotest_common.sh@10 -- # set +x
00:03:47.505 ************************************
00:03:47.505 START TEST odd_alloc
00:03:47.505 ************************************
00:03:47.505 22:03:44 -- common/autotest_common.sh@1114 -- # odd_alloc
00:03:47.505 22:03:44 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:47.505 22:03:44 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:47.505 22:03:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:47.505 22:03:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:47.505 22:03:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:47.505 22:03:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:47.505 22:03:44 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:47.505 22:03:44 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:47.505 22:03:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:47.505 22:03:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:47.505 22:03:44 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:47.505 22:03:44 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:47.505 22:03:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:47.505 22:03:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:47.505 22:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:47.505 22:03:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:47.505 22:03:44 -- setup/hugepages.sh@83 -- # : 0
00:03:47.505 22:03:44 -- setup/hugepages.sh@84 -- # : 0
00:03:47.505 22:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:47.505 22:03:44 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:47.505 22:03:44 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:47.505 22:03:44 -- setup/hugepages.sh@160 -- # setup output
00:03:47.505 22:03:44 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:47.505 22:03:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:48.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:48.080 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:48.080 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:48.080 22:03:44 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:48.080 22:03:44 -- setup/hugepages.sh@89 -- # local node
00:03:48.080 22:03:44 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:48.080 22:03:44 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:48.080 22:03:44 -- setup/hugepages.sh@92 -- # local surp
00:03:48.080 22:03:44 -- setup/hugepages.sh@93 -- # local resv
00:03:48.080 22:03:44 -- setup/hugepages.sh@94 -- # local anon
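The odd_alloc setup above requests HUGEMEM=2049 MB, which get_test_nr_hugepages receives as size=2098176 kB and turns into nr_hugepages=1025 against the 2048 kB default hugepage size. A minimal sketch of that conversion, assuming ceiling division (variable names here are illustrative, not taken from hugepages.sh):

  hugemem_mb=2049                                                  # requested budget in MB, as in the log
  hugepgsz_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
  size_kb=$(( hugemem_mb * 1024 ))                                 # 2098176 kB
  nr_hugepages=$(( (size_kb + hugepgsz_kb - 1) / hugepgsz_kb ))    # round up: 1024.5 -> 1025
  echo "nr_hugepages=${nr_hugepages}"

2098176 / 2048 is 1024.5, so rounding up yields the deliberately odd page count of 1025 that verify_nr_hugepages checks below.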
00:03:48.080 22:03:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:48.080 22:03:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:48.080 22:03:44 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:48.080 22:03:44 -- setup/common.sh@18 -- # local node=
00:03:48.080 22:03:44 -- setup/common.sh@19 -- # local var val
00:03:48.080 22:03:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.080 22:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.080 22:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.080 22:03:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.080 22:03:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.080 22:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.080 22:03:44 -- setup/common.sh@31 -- # IFS=': '
00:03:48.080 22:03:44 -- setup/common.sh@31 -- # read -r var val _
00:03:48.080 22:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7912588 kB' 'MemAvailable: 9423908 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 497984 kB' 'Inactive: 1345276 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119944 kB' 'Mapped: 50860 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163060 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95452 kB' 'KernelStack: 6424 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: the IFS=': ' / read -r var val _ / continue cycle repeats for every /proc/meminfo key that is not AnonHugePages]
00:03:48.081 22:03:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.081 22:03:44 -- setup/common.sh@33 -- # echo 0
00:03:48.081 22:03:44 -- setup/common.sh@33 -- # return 0
00:03:48.081 22:03:44 -- setup/hugepages.sh@97 -- # anon=0
00:03:48.081 22:03:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.081 22:03:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.081 22:03:44 -- setup/common.sh@18 -- # local node=
00:03:48.081 22:03:44 -- setup/common.sh@19 -- # local var val
00:03:48.081 22:03:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.081 22:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.081 22:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
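The get_meminfo trace above is a keyed lookup over /proc/meminfo: the file is read in, IFS is set to ': ', each line is split into a field name and a value, non-matching fields fall through to continue, and the matching field's value is echoed. A self-contained sketch of that lookup (the function name is illustrative; the real helper in setup/common.sh also handles per-NUMA-node meminfo files):

  get_meminfo_value() {
      # Print the value of one /proc/meminfo field, e.g. AnonHugePages or HugePages_Total.
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  # get_meminfo_value AnonHugePages   -> 0 on this run
  # get_meminfo_value HugePages_Total -> 1025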
00:03:48.082 22:03:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.082 22:03:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.082 22:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.082 22:03:44 -- setup/common.sh@31 -- # IFS=': '
00:03:48.082 22:03:44 -- setup/common.sh@31 -- # read -r var val _
00:03:48.082 22:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7912224 kB' 'MemAvailable: 9423544 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 497796 kB' 'Inactive: 1345276 kB' 'Active(anon): 128608 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119816 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163064 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95456 kB' 'KernelStack: 6464 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: the IFS=': ' / read -r var val _ / continue cycle repeats for every /proc/meminfo key that is not HugePages_Surp]
00:03:48.083 22:03:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.083 22:03:44 -- setup/common.sh@33 -- # echo 0
00:03:48.083 22:03:44 -- setup/common.sh@33 -- # return 0
00:03:48.083 22:03:44 -- setup/hugepages.sh@99 -- # surp=0
00:03:48.083 22:03:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:48.083 22:03:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:48.084 22:03:44 -- setup/common.sh@18 -- # local node=
00:03:48.084 22:03:44 -- setup/common.sh@19 -- # local var val
00:03:48.084 22:03:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.084 22:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.084 22:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.084 22:03:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.084 22:03:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.084 22:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.084 22:03:44 -- setup/common.sh@31 -- # IFS=': '
00:03:48.084 22:03:44 -- setup/common.sh@31 -- # read -r var val _
00:03:48.084 22:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7912224 kB' 'MemAvailable: 9423544 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 498008 kB' 'Inactive: 1345276 kB' 'Active(anon): 128820 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119720 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163060 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95452 kB' 'KernelStack: 6464 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: the IFS=': ' / read -r var val _ / continue cycle repeats for every /proc/meminfo key that is not HugePages_Rsvd]
00:03:48.085 22:03:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.085 22:03:44 -- setup/common.sh@33 -- # echo 0
00:03:48.085 22:03:44 -- setup/common.sh@33 -- # return 0
00:03:48.085 22:03:44 -- setup/hugepages.sh@100 -- # resv=0
00:03:48.085 nr_hugepages=1025
00:03:48.085 22:03:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:48.085 resv_hugepages=0
00:03:48.085 22:03:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:48.085 surplus_hugepages=0
00:03:48.085 22:03:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:48.085 anon_hugepages=0
00:03:48.085 22:03:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:48.085 22:03:44 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:48.085 22:03:44 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:48.085 22:03:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:48.085 22:03:44 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:48.085 22:03:44 -- setup/common.sh@18 -- # local node=
00:03:48.085 22:03:44 -- setup/common.sh@19 -- # local var val
00:03:48.085 22:03:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.085 22:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.085 22:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.085 22:03:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.085 22:03:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.085 22:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.085 22:03:44 -- setup/common.sh@31 -- # IFS=': '
00:03:48.085 22:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7912476 kB' 'MemAvailable: 9423796 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 497780 kB' 'Inactive: 1345276 kB' 'Active(anon): 128592 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119752 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163052 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95444 kB' 'KernelStack: 6464 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB'
00:03:48.085 22:03:44 -- setup/common.sh@31 -- # read -r var val _
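At this point the test has anon=0, surp=0 and resv=0, and the echoes above record the pool state before the final lookup of HugePages_Total. The @107/@109/@110 checks boil down to: the kernel's HugePages_Total must cover the requested 1025 pages plus any surplus and reserved pages, and on this run it must equal 1025 exactly. A standalone sketch of that accounting (the field names are the real /proc/meminfo keys; the script structure is illustrative):

  expected=1025
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)     # kB of transparent hugepages in use
  echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
  (( total == expected + surp + resv )) || exit 1   # pool accounts for the request plus extras
  (( total == expected )) || exit 1                 # and no extras are expected on this run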
00:03:48.085 22:03:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.085 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.085 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.085 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.085 22:03:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.085 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 
00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.086 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.086 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.087 22:03:44 -- setup/common.sh@33 -- # echo 1025 00:03:48.087 22:03:44 -- setup/common.sh@33 -- # return 0 00:03:48.087 22:03:44 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.087 22:03:44 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.087 22:03:44 -- setup/hugepages.sh@27 -- # local node 00:03:48.087 22:03:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.087 22:03:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
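The echo 1025 / return 0 just above is get_meminfo handing HugePages_Total back to odd_alloc, whose @110 check asserts that the kernel's total equals the odd request plus surplus and reserved pages; get_nodes then walks /sys/devices/system/node/node* and records the same 1025 pages for node0. A hedged restatement with the values from this run (surplus and reserved are taken as 0, which the later "node0=1025 expecting 1025" line is consistent with):

  nr_hugepages=1025 surp=0 resv=0          # values observed/assumed for this run
  (( 1025 == nr_hugepages + surp + resv )) \
      && echo "HugePages_Total matches the odd request"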
00:03:48.087 22:03:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:48.087 22:03:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.087 22:03:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.087 22:03:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.087 22:03:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.087 22:03:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.087 22:03:44 -- setup/common.sh@18 -- # local node=0 00:03:48.087 22:03:44 -- setup/common.sh@19 -- # local var val 00:03:48.087 22:03:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.087 22:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.087 22:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.087 22:03:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.087 22:03:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.087 22:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7912476 kB' 'MemUsed: 4326644 kB' 'SwapCached: 0 kB' 'Active: 497820 kB' 'Inactive: 1345276 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724948 kB' 'Mapped: 50788 kB' 'AnonPages: 119744 kB' 'Shmem: 10484 kB' 'KernelStack: 6448 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67608 kB' 'Slab: 163048 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.087 
22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.087 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.087 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 
22:03:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # continue 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.088 22:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.088 22:03:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.088 22:03:44 -- setup/common.sh@33 -- # echo 0 00:03:48.088 22:03:44 -- setup/common.sh@33 -- # return 0 00:03:48.088 22:03:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.088 22:03:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.088 22:03:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.088 22:03:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.088 node0=1025 expecting 1025 00:03:48.088 22:03:44 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:48.088 22:03:44 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:48.088 00:03:48.088 real 0m0.569s 00:03:48.088 user 0m0.271s 00:03:48.088 sys 0m0.324s 00:03:48.088 22:03:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.088 22:03:44 -- common/autotest_common.sh@10 -- # set +x 00:03:48.088 ************************************ 00:03:48.088 END TEST odd_alloc 00:03:48.088 ************************************ 00:03:48.088 22:03:44 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:48.088 22:03:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.088 22:03:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.088 22:03:44 -- common/autotest_common.sh@10 -- # set +x 00:03:48.358 ************************************ 00:03:48.358 START TEST custom_alloc 00:03:48.358 ************************************ 00:03:48.358 22:03:44 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:48.358 22:03:44 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:48.358 22:03:44 -- setup/hugepages.sh@169 -- # local node 00:03:48.358 22:03:44 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:48.358 22:03:44 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:48.358 22:03:44 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:48.358 22:03:44 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:03:48.358 22:03:44 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:48.358 22:03:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.358 22:03:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.358 22:03:44 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:48.358 22:03:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.358 22:03:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.358 22:03:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.358 22:03:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.358 22:03:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:48.358 22:03:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.358 22:03:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.358 22:03:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.358 22:03:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.358 22:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.358 22:03:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:48.358 22:03:44 -- setup/hugepages.sh@83 -- # : 0 00:03:48.358 22:03:44 -- setup/hugepages.sh@84 -- # : 0 00:03:48.358 22:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.358 22:03:44 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:48.358 22:03:44 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:48.358 22:03:44 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.358 22:03:44 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.358 22:03:44 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:48.358 22:03:44 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:48.358 22:03:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.358 22:03:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.358 22:03:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.358 22:03:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:48.358 22:03:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.358 22:03:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.358 22:03:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.359 22:03:44 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:48.359 22:03:44 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.359 22:03:44 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.359 22:03:44 -- setup/hugepages.sh@78 -- # return 0 00:03:48.359 22:03:44 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:48.359 22:03:44 -- setup/hugepages.sh@187 -- # setup output 00:03:48.359 22:03:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.359 22:03:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:48.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.621 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.621 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.621 22:03:45 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:48.621 22:03:45 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:48.621 22:03:45 -- setup/hugepages.sh@89 -- # local node 00:03:48.621 22:03:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.621 22:03:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.621 22:03:45 -- setup/hugepages.sh@92 -- # local surp 
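Just above, custom_alloc asked get_test_nr_hugepages for 1048576 kB worth of pages; assuming the argument and default_hugepages are both in kB, with the 2048 kB huge page size reported in the meminfo snapshots below that works out to the nr_hugepages=512 seen in the trace, all pinned to the only node via HUGENODE='nodes_hp[0]=512'. A hedged restatement of the arithmetic (default_hugepages_kb is an illustrative name):

  size_kb=1048576
  default_hugepages_kb=2048                  # Hugepagesize reported by this VM
  echo $(( size_kb / default_hugepages_kb )) # -> 512 huge pages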
00:03:48.621 22:03:45 -- setup/hugepages.sh@93 -- # local resv 00:03:48.621 22:03:45 -- setup/hugepages.sh@94 -- # local anon 00:03:48.621 22:03:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.621 22:03:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.621 22:03:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.621 22:03:45 -- setup/common.sh@18 -- # local node= 00:03:48.621 22:03:45 -- setup/common.sh@19 -- # local var val 00:03:48.621 22:03:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.621 22:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.621 22:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.621 22:03:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.621 22:03:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.621 22:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.621 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.621 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9029332 kB' 'MemAvailable: 10540652 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 498076 kB' 'Inactive: 1345276 kB' 'Active(anon): 128888 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119796 kB' 'Mapped: 50872 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163044 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95436 kB' 'KernelStack: 6456 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.622 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.622 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.623 22:03:45 -- setup/common.sh@33 -- # echo 0 00:03:48.623 22:03:45 -- setup/common.sh@33 -- # return 0 00:03:48.623 22:03:45 -- setup/hugepages.sh@97 -- # anon=0 00:03:48.623 22:03:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.623 22:03:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.623 22:03:45 -- setup/common.sh@18 -- # local node= 00:03:48.623 22:03:45 -- setup/common.sh@19 -- # local var val 00:03:48.623 22:03:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.623 22:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
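With anon=0 recorded, the script starts get_meminfo HugePages_Surp without a node argument (local node= above), so the per-node sysfs check that follows fails and the read stays on the system-wide /proc/meminfo; the earlier per-node calls in odd_alloc, by contrast, switched mem_f to /sys/devices/system/node/node0/meminfo. A hedged sketch of that source selection:

  node=""                                    # empty for system-wide calls
  mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo   # taken when node=0 earlier
  fi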
00:03:48.623 22:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.623 22:03:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.623 22:03:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.623 22:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9029332 kB' 'MemAvailable: 10540652 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 497968 kB' 'Inactive: 1345276 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119912 kB' 'Mapped: 50748 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163068 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95460 kB' 'KernelStack: 6480 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- 
setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.623 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.623 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 
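This second skip pass is verify_nr_hugepages collecting HugePages_Surp; once it matches (echo 0, surp=0 a little below), a third pass reads HugePages_Rsvd before the requested 512 pages are compared the same way odd_alloc's 1025 were at @110. A hedged, self-contained skeleton of that sequence, using awk lookups in place of get_meminfo:

  verify_sketch() {
      local anon surp resv total
      anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)   # checked separately in the real script
      surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
      resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
      total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
      # same shape as the @110 check traced earlier, with this test's 512 pages
      (( total == 512 + surp + resv ))
  }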
00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.624 22:03:45 -- setup/common.sh@33 -- # echo 0 00:03:48.624 22:03:45 -- setup/common.sh@33 -- # return 0 00:03:48.624 22:03:45 -- setup/hugepages.sh@99 -- # surp=0 00:03:48.624 22:03:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.624 22:03:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.624 22:03:45 -- setup/common.sh@18 -- # local node= 00:03:48.624 22:03:45 -- setup/common.sh@19 -- # local var val 00:03:48.624 22:03:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.624 22:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.624 22:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.624 22:03:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.624 22:03:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.624 22:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9029332 kB' 'MemAvailable: 10540652 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 497516 kB' 'Inactive: 1345276 kB' 'Active(anon): 128328 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119700 kB' 'Mapped: 
50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163060 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95452 kB' 'KernelStack: 6464 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.624 22:03:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.624 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.624 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 
00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.625 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.625 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.625 22:03:45 -- setup/common.sh@33 -- # echo 0 00:03:48.625 22:03:45 -- setup/common.sh@33 -- # return 0 00:03:48.625 22:03:45 -- setup/hugepages.sh@100 -- # resv=0 00:03:48.625 nr_hugepages=512 00:03:48.626 22:03:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:48.626 resv_hugepages=0 00:03:48.626 22:03:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.626 surplus_hugepages=0 00:03:48.626 22:03:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.626 anon_hugepages=0 00:03:48.626 22:03:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.626 22:03:45 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:48.626 22:03:45 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:48.626 22:03:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.626 22:03:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.626 22:03:45 -- setup/common.sh@18 -- # local node= 00:03:48.626 22:03:45 -- setup/common.sh@19 -- # local var val 00:03:48.626 22:03:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.626 22:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.626 22:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.626 22:03:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.626 22:03:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.626 22:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9029332 kB' 'MemAvailable: 10540652 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 497732 kB' 'Inactive: 1345276 kB' 'Active(anon): 128544 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119656 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163060 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95452 kB' 'KernelStack: 6448 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 318484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 
'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.626 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.626 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 
22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.627 22:03:45 -- setup/common.sh@33 -- # echo 512 00:03:48.627 22:03:45 -- setup/common.sh@33 -- # return 0 00:03:48.627 22:03:45 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:48.627 22:03:45 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.627 22:03:45 -- setup/hugepages.sh@27 -- # local node 00:03:48.627 22:03:45 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:03:48.627 22:03:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.627 22:03:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:48.627 22:03:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.627 22:03:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.627 22:03:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.627 22:03:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.627 22:03:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.627 22:03:45 -- setup/common.sh@18 -- # local node=0 00:03:48.627 22:03:45 -- setup/common.sh@19 -- # local var val 00:03:48.627 22:03:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.627 22:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.627 22:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.627 22:03:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.627 22:03:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.627 22:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9029332 kB' 'MemUsed: 3209788 kB' 'SwapCached: 0 kB' 'Active: 497660 kB' 'Inactive: 1345276 kB' 'Active(anon): 128472 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724948 kB' 'Mapped: 50788 kB' 'AnonPages: 119588 kB' 'Shmem: 10484 kB' 'KernelStack: 6416 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67608 kB' 'Slab: 163060 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.627 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.627 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.628 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.628 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 
22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # continue 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.888 22:03:45 -- setup/common.sh@33 -- # echo 0 00:03:48.888 22:03:45 -- setup/common.sh@33 -- # return 0 00:03:48.888 22:03:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.888 22:03:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.888 22:03:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.888 22:03:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.888 node0=512 expecting 512 00:03:48.888 22:03:45 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:48.888 22:03:45 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:48.888 00:03:48.888 real 0m0.553s 00:03:48.888 user 0m0.273s 00:03:48.888 sys 0m0.317s 00:03:48.888 22:03:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.888 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:03:48.889 ************************************ 00:03:48.889 END TEST custom_alloc 00:03:48.889 ************************************ 00:03:48.889 22:03:45 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:48.889 22:03:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.889 22:03:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.889 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:03:48.889 ************************************ 00:03:48.889 START TEST no_shrink_alloc 00:03:48.889 ************************************ 00:03:48.889 22:03:45 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:03:48.889 22:03:45 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:48.889 22:03:45 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.889 22:03:45 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:48.889 22:03:45 -- 
setup/hugepages.sh@51 -- # shift 00:03:48.889 22:03:45 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:48.889 22:03:45 -- setup/hugepages.sh@52 -- # local node_ids 00:03:48.889 22:03:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.889 22:03:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.889 22:03:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:48.889 22:03:45 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:48.889 22:03:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.889 22:03:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.889 22:03:45 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:48.889 22:03:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.889 22:03:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.889 22:03:45 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:48.889 22:03:45 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:48.889 22:03:45 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:48.889 22:03:45 -- setup/hugepages.sh@73 -- # return 0 00:03:48.889 22:03:45 -- setup/hugepages.sh@198 -- # setup output 00:03:48.889 22:03:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.889 22:03:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:49.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.151 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.151 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.151 22:03:45 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:49.151 22:03:45 -- setup/hugepages.sh@89 -- # local node 00:03:49.151 22:03:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.151 22:03:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.151 22:03:45 -- setup/hugepages.sh@92 -- # local surp 00:03:49.151 22:03:45 -- setup/hugepages.sh@93 -- # local resv 00:03:49.151 22:03:45 -- setup/hugepages.sh@94 -- # local anon 00:03:49.151 22:03:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.151 22:03:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.151 22:03:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.151 22:03:45 -- setup/common.sh@18 -- # local node= 00:03:49.151 22:03:45 -- setup/common.sh@19 -- # local var val 00:03:49.151 22:03:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.151 22:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.151 22:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.151 22:03:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.151 22:03:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.151 22:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7975688 kB' 'MemAvailable: 9487008 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 498160 kB' 'Inactive: 1345276 kB' 'Active(anon): 128972 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120156 kB' 
'Mapped: 50892 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163076 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95468 kB' 'KernelStack: 6452 kB' 'PageTables: 4672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 318684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.151 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.151 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.152 22:03:45 -- setup/common.sh@33 -- # echo 0 00:03:49.152 22:03:45 -- setup/common.sh@33 -- # return 0 00:03:49.152 22:03:45 -- setup/hugepages.sh@97 -- # anon=0 00:03:49.152 22:03:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.152 22:03:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.152 22:03:45 -- setup/common.sh@18 -- # local node= 00:03:49.152 22:03:45 -- setup/common.sh@19 -- # local var val 00:03:49.152 22:03:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.152 22:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.152 22:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.152 22:03:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.152 22:03:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.152 22:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7975688 kB' 'MemAvailable: 9487008 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 497984 kB' 'Inactive: 1345276 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119704 kB' 'Mapped: 50892 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163076 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95468 kB' 'KernelStack: 6436 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 318684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.152 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.152 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.153 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.153 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 
00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.415 22:03:45 -- setup/common.sh@33 -- # echo 0 00:03:49.415 22:03:45 -- setup/common.sh@33 -- # return 0 00:03:49.415 22:03:45 -- setup/hugepages.sh@99 -- # surp=0 00:03:49.415 22:03:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.415 22:03:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.415 22:03:45 -- setup/common.sh@18 -- # local node= 00:03:49.415 22:03:45 -- setup/common.sh@19 -- # local var val 00:03:49.415 22:03:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.415 22:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.415 22:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.415 22:03:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.415 22:03:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.415 22:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7975688 kB' 'MemAvailable: 9487008 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 497988 kB' 'Inactive: 1345276 kB' 'Active(anon): 128800 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119964 kB' 'Mapped: 50892 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163072 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95464 kB' 'KernelStack: 6436 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 318684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 
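By this point hugepages.sh has collected anon=0 and surp=0, and the scan now underway repeats the same field-by-field walk for HugePages_Rsvd; a little further down it prints nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, confirms HugePages_Total is 1024, and then re-runs the check against node0's own meminfo. Broadly, the bookkeeping amounts to the arithmetic sketched below, using the values visible in the printf dumps; get_meminfo_sketch is the illustrative helper from the earlier sketch, not the script's real interface.

    # Values as reported by this runner's meminfo dumps in the surrounding trace.
    anon=$(get_meminfo_sketch AnonHugePages)    # 0    - no transparent hugepages in use
    surp=$(get_meminfo_sketch HugePages_Surp)   # 0    - no surplus pages
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0    - no reserved pages
    total=$(get_meminfo_sketch HugePages_Total) # 1024 - pre-allocated 2048 kB pages

    # The verification only passes when the pool matches the requested size exactly
    # and nothing is surplus or reserved: 1024 == 1024 + 0 + 0.
    if (( total == 1024 + surp + resv )); then
        echo "node0=$total expecting 1024"      # mirrors the 'node0=1024 expecting 1024' log line below
    fi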
00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 
-- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.415 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.415 22:03:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.416 22:03:45 -- setup/common.sh@33 -- # echo 0 00:03:49.416 22:03:45 -- setup/common.sh@33 -- # return 0 00:03:49.416 22:03:45 -- setup/hugepages.sh@100 -- # resv=0 00:03:49.416 22:03:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.416 nr_hugepages=1024 00:03:49.416 resv_hugepages=0 00:03:49.416 22:03:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.416 surplus_hugepages=0 00:03:49.416 22:03:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.416 anon_hugepages=0 00:03:49.416 22:03:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.416 22:03:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.416 22:03:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.416 22:03:45 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:03:49.416 22:03:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.416 22:03:45 -- setup/common.sh@18 -- # local node= 00:03:49.416 22:03:45 -- setup/common.sh@19 -- # local var val 00:03:49.416 22:03:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.416 22:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.416 22:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.416 22:03:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.416 22:03:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.416 22:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7975688 kB' 'MemAvailable: 9487008 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 497924 kB' 'Inactive: 1345276 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119860 kB' 'Mapped: 50788 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 163084 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95476 kB' 'KernelStack: 6460 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 318684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.416 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.416 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.417 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.417 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.418 22:03:45 -- setup/common.sh@33 -- # echo 1024 00:03:49.418 22:03:45 -- setup/common.sh@33 -- # return 0 00:03:49.418 22:03:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.418 22:03:45 -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.418 22:03:45 -- setup/hugepages.sh@27 -- # local node 00:03:49.418 22:03:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.418 22:03:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:49.418 22:03:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:49.418 22:03:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.418 22:03:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.418 22:03:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.418 22:03:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.418 22:03:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.418 22:03:45 -- setup/common.sh@18 -- # local node=0 00:03:49.418 22:03:45 -- setup/common.sh@19 -- # local var val 00:03:49.418 22:03:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.418 22:03:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.418 22:03:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.418 22:03:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.418 22:03:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.418 22:03:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7975688 kB' 'MemUsed: 4263432 kB' 'SwapCached: 0 kB' 'Active: 497672 kB' 'Inactive: 1345276 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724948 kB' 'Mapped: 50788 kB' 'AnonPages: 119604 kB' 'Shmem: 10484 kB' 'KernelStack: 6460 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67608 kB' 'Slab: 163080 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- 
setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.418 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.418 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # continue 00:03:49.419 22:03:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.419 22:03:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.419 22:03:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.419 22:03:45 -- setup/common.sh@33 -- # echo 0 00:03:49.419 22:03:45 -- setup/common.sh@33 -- # return 0 00:03:49.419 22:03:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.419 22:03:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.419 22:03:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.419 22:03:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.419 node0=1024 expecting 1024 00:03:49.419 22:03:45 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:49.419 22:03:45 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:49.419 22:03:45 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:49.419 22:03:45 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:49.419 22:03:45 -- setup/hugepages.sh@202 -- # setup output 00:03:49.419 22:03:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.419 22:03:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:49.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.679 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.679 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:49.679 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:49.679 22:03:46 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:49.679 22:03:46 -- setup/hugepages.sh@89 -- # local node 00:03:49.679 22:03:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.679 22:03:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.679 22:03:46 -- setup/hugepages.sh@92 -- # local surp 00:03:49.679 22:03:46 -- setup/hugepages.sh@93 -- # local resv 00:03:49.679 22:03:46 -- setup/hugepages.sh@94 -- # local anon 00:03:49.679 22:03:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.679 22:03:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.679 22:03:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.679 22:03:46 -- setup/common.sh@18 -- # local node= 00:03:49.679 22:03:46 -- setup/common.sh@19 -- # local var val 00:03:49.679 22:03:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.679 22:03:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.679 22:03:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.679 22:03:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.679 22:03:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.679 22:03:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7986376 kB' 'MemAvailable: 9497696 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 495748 kB' 'Inactive: 1345276 kB' 'Active(anon): 126560 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117740 kB' 'Mapped: 50168 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 
162952 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95344 kB' 'KernelStack: 6440 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 302848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 
-- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.679 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.679 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.680 22:03:46 -- setup/common.sh@33 -- # echo 0 00:03:49.680 22:03:46 -- setup/common.sh@33 -- # return 0 00:03:49.680 22:03:46 -- setup/hugepages.sh@97 -- # anon=0 00:03:49.680 22:03:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.680 22:03:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.680 22:03:46 -- setup/common.sh@18 -- # local node= 00:03:49.680 22:03:46 -- setup/common.sh@19 -- # local var val 00:03:49.680 22:03:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.680 22:03:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.680 22:03:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.680 22:03:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.680 22:03:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.680 22:03:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7986376 kB' 'MemAvailable: 9497696 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 495472 kB' 'Inactive: 1345276 kB' 'Active(anon): 126284 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117492 kB' 'Mapped: 50220 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 162932 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95324 kB' 'KernelStack: 6408 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 302848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 
'DirectMap1G: 8388608 kB' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.680 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.680 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.681 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 
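
Aside: the long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s... ]]" / "continue" records above come from setup/common.sh's get_meminfo helper, which the verifier calls once per counter it needs (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total). Reconstructed from the setup/common.sh@17-33 trace lines, it reads /proc/meminfo (or the per-node file when a node argument is given) into an array and scans it key by key, so every non-matching field shows up as exactly one comparison plus a continue. A minimal sketch follows; the argument handling and the read loop plumbing are assumptions, the rest mirrors the traced commands:

    # Sketch of get_meminfo as reconstructed from the setup/common.sh@17-33
    # trace above; signature and loop mechanics are assumed, not confirmed.
    shopt -s extglob                        # needed for the "Node N " prefix strip below

    get_meminfo() {
        local get=$1 node=${2:-}            # e.g. HugePages_Surp, optional NUMA node
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # with a node argument, read the per-node view of the same counters
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node <n> "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # each skipped key is one trace record
            echo "$val"                        # value only, e.g. "0" or "1024"
            return 0
        done
        return 1
    }

Read this way, get_meminfo HugePages_Surp echoes 0 in this run, and the later per-node call against node0/meminfo is what yields 1024; since /proc/meminfo carries several dozen keys, a single lookup accounts for the bulk of the xtrace records in this section.
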
00:03:49.681 22:03:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.681 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.942 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.942 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.942 22:03:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.942 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.942 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.942 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.942 22:03:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.942 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 
22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.943 22:03:46 -- setup/common.sh@33 -- # echo 0 00:03:49.943 22:03:46 -- setup/common.sh@33 -- # return 0 00:03:49.943 22:03:46 -- setup/hugepages.sh@99 -- # surp=0 00:03:49.943 22:03:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.943 22:03:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.943 22:03:46 -- setup/common.sh@18 -- # local node= 00:03:49.943 22:03:46 -- setup/common.sh@19 -- # local var val 00:03:49.943 22:03:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.943 22:03:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.943 22:03:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.943 22:03:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.943 22:03:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.943 22:03:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7986376 kB' 'MemAvailable: 9497696 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 495152 kB' 'Inactive: 1345276 kB' 'Active(anon): 125964 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117124 kB' 'Mapped: 50220 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 162932 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95324 kB' 'KernelStack: 6376 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 302848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.943 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.943 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 
-- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 
00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.944 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.944 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.945 22:03:46 -- setup/common.sh@33 -- # echo 0 00:03:49.945 22:03:46 -- setup/common.sh@33 -- # return 0 00:03:49.945 22:03:46 -- setup/hugepages.sh@100 -- # resv=0 00:03:49.945 nr_hugepages=1024 00:03:49.945 22:03:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.945 resv_hugepages=0 00:03:49.945 22:03:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.945 surplus_hugepages=0 00:03:49.945 22:03:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.945 anon_hugepages=0 00:03:49.945 22:03:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.945 22:03:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.945 22:03:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.945 22:03:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.945 22:03:46 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:49.945 22:03:46 -- setup/common.sh@18 -- # local node= 00:03:49.945 22:03:46 -- setup/common.sh@19 -- # local var val 00:03:49.945 22:03:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.945 22:03:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.945 22:03:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.945 22:03:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.945 22:03:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.945 22:03:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7986376 kB' 'MemAvailable: 9497696 kB' 'Buffers: 2684 kB' 'Cached: 1722264 kB' 'SwapCached: 0 kB' 'Active: 495020 kB' 'Inactive: 1345276 kB' 'Active(anon): 125832 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117000 kB' 'Mapped: 50088 kB' 'Shmem: 10484 kB' 'KReclaimable: 67608 kB' 'Slab: 162932 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95324 kB' 'KernelStack: 6416 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 302848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 6088704 kB' 'DirectMap1G: 8388608 kB' 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- 
setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 
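
For context on the accounting being traced here: the verifier pulls HugePages_Surp, HugePages_Rsvd and, because transparent hugepages are not set to [never] on this host, AnonHugePages through get_meminfo, checks that the kernel's HugePages_Total equals the requested nr_hugepages plus surplus and reserved pages (1024 == 1024 + 0 + 0 in this run), and then repeats the same accounting per NUMA node, which is what produced the "node0=1024 expecting 1024" line earlier. A condensed sketch of that flow, reusing the get_meminfo sketch above and assuming that nr_hugepages and the per-node expectations in nodes_test are supplied by the caller:

    # Condensed sketch of the checks traced at setup/hugepages.sh@96-130 in
    # this run; variable names follow the trace, how nr_hugepages and
    # nodes_test get populated is an assumption.
    declare -a nodes_test nodes_sys   # nodes_test[N] = pages the test expects on node N

    verify_nr_hugepages() {
        local surp resv anon node
        # anonymous THP pages only count when THP is not pinned to [never]
        [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]] \
            && anon=$(get_meminfo AnonHugePages)
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=${anon:-0}"
        # system-wide: total pages must equal request + reserved + surplus
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1
        # per node: same identity against /sys/devices/system/node/nodeN/meminfo
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv + $(get_meminfo HugePages_Surp "$node") ))
            nodes_sys[node]=$(get_meminfo HugePages_Total "$node")
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
            [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
        done
    }

Under this reading, the earlier setup.sh run with CLEAR_HUGE=no and NRHUGE=512 left the pre-existing 1024 pages in place ("Requested 512 hugepages but 1024 already allocated on node0"), and the check above is confirming that those 1024 pages are all accounted for with no surplus or reserved pages outstanding.
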
00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.945 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.945 22:03:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 
22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.946 22:03:46 -- setup/common.sh@33 -- # echo 1024 00:03:49.946 22:03:46 -- setup/common.sh@33 -- # return 0 00:03:49.946 22:03:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.946 22:03:46 -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.946 22:03:46 -- setup/hugepages.sh@27 -- # local node 00:03:49.946 22:03:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.946 22:03:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:49.946 22:03:46 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:49.946 22:03:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.946 22:03:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.946 22:03:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.946 22:03:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.946 22:03:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.946 22:03:46 -- setup/common.sh@18 -- # local node=0 00:03:49.946 22:03:46 -- setup/common.sh@19 -- # local var val 00:03:49.946 22:03:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.946 22:03:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.946 22:03:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.946 22:03:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.946 22:03:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.946 22:03:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7986376 kB' 'MemUsed: 4252744 kB' 'SwapCached: 0 kB' 'Active: 495024 kB' 'Inactive: 1345276 kB' 'Active(anon): 125836 kB' 'Inactive(anon): 0 kB' 'Active(file): 369188 kB' 'Inactive(file): 1345276 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 1724948 kB' 'Mapped: 50088 kB' 'AnonPages: 117000 kB' 'Shmem: 10484 kB' 'KernelStack: 6416 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67608 kB' 'Slab: 162932 kB' 'SReclaimable: 67608 kB' 'SUnreclaim: 95324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.946 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.946 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 
22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- 
# continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@32 -- # continue 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.947 22:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.947 22:03:46 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.947 22:03:46 -- setup/common.sh@33 -- # echo 0 00:03:49.947 22:03:46 -- setup/common.sh@33 -- # return 0 00:03:49.947 22:03:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.947 22:03:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.947 22:03:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.947 22:03:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.947 node0=1024 expecting 1024 00:03:49.947 22:03:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:49.947 22:03:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:49.947 00:03:49.947 real 0m1.086s 00:03:49.947 user 0m0.539s 00:03:49.947 sys 0m0.617s 00:03:49.947 22:03:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:49.947 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:03:49.947 ************************************ 00:03:49.947 END TEST no_shrink_alloc 00:03:49.947 ************************************ 00:03:49.947 22:03:46 -- setup/hugepages.sh@217 -- # clear_hp 00:03:49.947 22:03:46 -- setup/hugepages.sh@37 -- # local node hp 00:03:49.947 22:03:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:49.947 22:03:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.947 22:03:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:49.947 22:03:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.947 22:03:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:49.947 22:03:46 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:49.947 22:03:46 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:49.947 00:03:49.947 real 0m4.862s 00:03:49.947 user 0m2.352s 00:03:49.947 sys 0m2.648s 00:03:49.947 22:03:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:49.947 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:03:49.948 ************************************ 00:03:49.948 END TEST hugepages 00:03:49.948 ************************************ 00:03:49.948 22:03:46 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:49.948 22:03:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:49.948 22:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:49.948 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:03:49.948 ************************************ 00:03:49.948 START TEST driver 00:03:49.948 ************************************ 00:03:49.948 22:03:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:49.948 * Looking for test storage... 
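Every HugePages_* probe in the trace above goes through setup/common.sh's get_meminfo helper, which is why the log shows a continue for each /proc/meminfo field (Dirty, Writeback, AnonPages, ...) before the requested key matches and its value is echoed; with a node argument it reads /sys/devices/system/node/node<N>/meminfo instead and strips the "Node <N>" prefix first. What follows is only a simplified sketch of that pattern under a made-up name (get_meminfo_sketch), not the helper itself:

#!/usr/bin/env bash
# Simplified sketch only; mirrors the pattern visible in the xtrace, not the real helper.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix each line with "Node <N> "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                   # e.g. 1024 for HugePages_Total above
            return 0
        fi
    done
    return 1
}
# get_meminfo_sketch HugePages_Total    -> 1024 in the run above
# get_meminfo_sketch HugePages_Surp 0   -> 0 for node 0 in the run above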
00:03:50.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:50.207 22:03:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:50.207 22:03:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:50.207 22:03:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:50.207 22:03:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:50.207 22:03:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:50.207 22:03:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:50.207 22:03:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:50.207 22:03:46 -- scripts/common.sh@335 -- # IFS=.-: 00:03:50.207 22:03:46 -- scripts/common.sh@335 -- # read -ra ver1 00:03:50.207 22:03:46 -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.207 22:03:46 -- scripts/common.sh@336 -- # read -ra ver2 00:03:50.207 22:03:46 -- scripts/common.sh@337 -- # local 'op=<' 00:03:50.207 22:03:46 -- scripts/common.sh@339 -- # ver1_l=2 00:03:50.207 22:03:46 -- scripts/common.sh@340 -- # ver2_l=1 00:03:50.207 22:03:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:50.207 22:03:46 -- scripts/common.sh@343 -- # case "$op" in 00:03:50.207 22:03:46 -- scripts/common.sh@344 -- # : 1 00:03:50.207 22:03:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:50.207 22:03:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.207 22:03:46 -- scripts/common.sh@364 -- # decimal 1 00:03:50.207 22:03:46 -- scripts/common.sh@352 -- # local d=1 00:03:50.207 22:03:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.207 22:03:46 -- scripts/common.sh@354 -- # echo 1 00:03:50.207 22:03:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:50.207 22:03:46 -- scripts/common.sh@365 -- # decimal 2 00:03:50.207 22:03:46 -- scripts/common.sh@352 -- # local d=2 00:03:50.207 22:03:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.207 22:03:46 -- scripts/common.sh@354 -- # echo 2 00:03:50.207 22:03:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:50.207 22:03:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:50.207 22:03:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:50.207 22:03:46 -- scripts/common.sh@367 -- # return 0 00:03:50.207 22:03:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.207 22:03:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.207 --rc genhtml_branch_coverage=1 00:03:50.207 --rc genhtml_function_coverage=1 00:03:50.207 --rc genhtml_legend=1 00:03:50.207 --rc geninfo_all_blocks=1 00:03:50.207 --rc geninfo_unexecuted_blocks=1 00:03:50.207 00:03:50.207 ' 00:03:50.207 22:03:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.207 --rc genhtml_branch_coverage=1 00:03:50.207 --rc genhtml_function_coverage=1 00:03:50.207 --rc genhtml_legend=1 00:03:50.207 --rc geninfo_all_blocks=1 00:03:50.207 --rc geninfo_unexecuted_blocks=1 00:03:50.207 00:03:50.207 ' 00:03:50.207 22:03:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.207 --rc genhtml_branch_coverage=1 00:03:50.207 --rc genhtml_function_coverage=1 00:03:50.207 --rc genhtml_legend=1 00:03:50.207 --rc geninfo_all_blocks=1 00:03:50.207 --rc geninfo_unexecuted_blocks=1 00:03:50.207 00:03:50.207 ' 00:03:50.207 22:03:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.207 --rc genhtml_branch_coverage=1 00:03:50.207 --rc genhtml_function_coverage=1 00:03:50.207 --rc genhtml_legend=1 00:03:50.207 --rc geninfo_all_blocks=1 00:03:50.207 --rc geninfo_unexecuted_blocks=1 00:03:50.207 00:03:50.207 ' 00:03:50.207 22:03:46 -- setup/driver.sh@68 -- # setup reset 00:03:50.207 22:03:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.207 22:03:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.775 22:03:47 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:50.775 22:03:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.775 22:03:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.775 22:03:47 -- common/autotest_common.sh@10 -- # set +x 00:03:50.775 ************************************ 00:03:50.775 START TEST guess_driver 00:03:50.775 ************************************ 00:03:50.775 22:03:47 -- common/autotest_common.sh@1114 -- # guess_driver 00:03:50.775 22:03:47 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:50.775 22:03:47 -- setup/driver.sh@47 -- # local fail=0 00:03:50.775 22:03:47 -- setup/driver.sh@49 -- # pick_driver 00:03:50.775 22:03:47 -- setup/driver.sh@36 -- # vfio 00:03:50.775 22:03:47 -- setup/driver.sh@21 -- # local iommu_grups 00:03:50.775 22:03:47 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:50.775 22:03:47 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:50.775 22:03:47 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:50.775 22:03:47 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:50.775 22:03:47 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:50.775 22:03:47 -- setup/driver.sh@32 -- # return 1 00:03:50.775 22:03:47 -- setup/driver.sh@38 -- # uio 00:03:50.775 22:03:47 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:50.775 22:03:47 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:50.775 22:03:47 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:50.775 22:03:47 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:50.775 22:03:47 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:50.775 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:50.775 22:03:47 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:50.775 22:03:47 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:50.775 22:03:47 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:50.775 Looking for driver=uio_pci_generic 00:03:50.775 22:03:47 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:50.775 22:03:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.775 22:03:47 -- setup/driver.sh@45 -- # setup output config 00:03:50.775 22:03:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.775 22:03:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:51.342 22:03:47 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:51.342 22:03:47 -- setup/driver.sh@58 -- # continue 00:03:51.342 22:03:47 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.601 22:03:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:51.601 22:03:48 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:03:51.601 22:03:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.601 22:03:48 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:51.601 22:03:48 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:51.601 22:03:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.601 22:03:48 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:51.601 22:03:48 -- setup/driver.sh@65 -- # setup reset 00:03:51.601 22:03:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.601 22:03:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:52.168 00:03:52.168 real 0m1.513s 00:03:52.168 user 0m0.531s 00:03:52.168 sys 0m0.965s 00:03:52.168 22:03:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:52.168 22:03:48 -- common/autotest_common.sh@10 -- # set +x 00:03:52.168 ************************************ 00:03:52.168 END TEST guess_driver 00:03:52.168 ************************************ 00:03:52.427 00:03:52.427 real 0m2.316s 00:03:52.427 user 0m0.840s 00:03:52.427 sys 0m1.521s 00:03:52.427 22:03:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:52.427 22:03:48 -- common/autotest_common.sh@10 -- # set +x 00:03:52.427 ************************************ 00:03:52.427 END TEST driver 00:03:52.427 ************************************ 00:03:52.427 22:03:48 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:52.427 22:03:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.427 22:03:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.427 22:03:48 -- common/autotest_common.sh@10 -- # set +x 00:03:52.427 ************************************ 00:03:52.427 START TEST devices 00:03:52.427 ************************************ 00:03:52.427 22:03:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:52.427 * Looking for test storage... 00:03:52.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:52.427 22:03:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:52.427 22:03:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:52.427 22:03:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:52.427 22:03:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:52.427 22:03:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:52.427 22:03:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:52.427 22:03:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:52.427 22:03:49 -- scripts/common.sh@335 -- # IFS=.-: 00:03:52.427 22:03:49 -- scripts/common.sh@335 -- # read -ra ver1 00:03:52.427 22:03:49 -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.427 22:03:49 -- scripts/common.sh@336 -- # read -ra ver2 00:03:52.427 22:03:49 -- scripts/common.sh@337 -- # local 'op=<' 00:03:52.427 22:03:49 -- scripts/common.sh@339 -- # ver1_l=2 00:03:52.427 22:03:49 -- scripts/common.sh@340 -- # ver2_l=1 00:03:52.427 22:03:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:52.427 22:03:49 -- scripts/common.sh@343 -- # case "$op" in 00:03:52.427 22:03:49 -- scripts/common.sh@344 -- # : 1 00:03:52.427 22:03:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:52.427 22:03:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.427 22:03:49 -- scripts/common.sh@364 -- # decimal 1 00:03:52.427 22:03:49 -- scripts/common.sh@352 -- # local d=1 00:03:52.427 22:03:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.427 22:03:49 -- scripts/common.sh@354 -- # echo 1 00:03:52.427 22:03:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:52.427 22:03:49 -- scripts/common.sh@365 -- # decimal 2 00:03:52.427 22:03:49 -- scripts/common.sh@352 -- # local d=2 00:03:52.427 22:03:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.427 22:03:49 -- scripts/common.sh@354 -- # echo 2 00:03:52.427 22:03:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:52.427 22:03:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:52.427 22:03:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:52.427 22:03:49 -- scripts/common.sh@367 -- # return 0 00:03:52.427 22:03:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.427 22:03:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:52.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.427 --rc genhtml_branch_coverage=1 00:03:52.427 --rc genhtml_function_coverage=1 00:03:52.427 --rc genhtml_legend=1 00:03:52.427 --rc geninfo_all_blocks=1 00:03:52.427 --rc geninfo_unexecuted_blocks=1 00:03:52.427 00:03:52.427 ' 00:03:52.427 22:03:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:52.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.427 --rc genhtml_branch_coverage=1 00:03:52.427 --rc genhtml_function_coverage=1 00:03:52.427 --rc genhtml_legend=1 00:03:52.427 --rc geninfo_all_blocks=1 00:03:52.427 --rc geninfo_unexecuted_blocks=1 00:03:52.427 00:03:52.427 ' 00:03:52.427 22:03:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:52.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.427 --rc genhtml_branch_coverage=1 00:03:52.427 --rc genhtml_function_coverage=1 00:03:52.427 --rc genhtml_legend=1 00:03:52.427 --rc geninfo_all_blocks=1 00:03:52.427 --rc geninfo_unexecuted_blocks=1 00:03:52.427 00:03:52.427 ' 00:03:52.427 22:03:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:52.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.427 --rc genhtml_branch_coverage=1 00:03:52.427 --rc genhtml_function_coverage=1 00:03:52.427 --rc genhtml_legend=1 00:03:52.427 --rc geninfo_all_blocks=1 00:03:52.427 --rc geninfo_unexecuted_blocks=1 00:03:52.427 00:03:52.427 ' 00:03:52.427 22:03:49 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:52.427 22:03:49 -- setup/devices.sh@192 -- # setup reset 00:03:52.427 22:03:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.427 22:03:49 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.363 22:03:49 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:53.363 22:03:49 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:53.363 22:03:49 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:53.363 22:03:49 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:53.363 22:03:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:53.363 22:03:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:53.363 22:03:49 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:53.363 22:03:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.363 22:03:49 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:03:53.363 22:03:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:53.363 22:03:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:53.363 22:03:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:53.363 22:03:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:53.363 22:03:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:53.363 22:03:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:53.363 22:03:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:53.363 22:03:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:53.363 22:03:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:53.363 22:03:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:53.363 22:03:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:53.363 22:03:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:53.363 22:03:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:53.363 22:03:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:53.363 22:03:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:53.363 22:03:49 -- setup/devices.sh@196 -- # blocks=() 00:03:53.363 22:03:49 -- setup/devices.sh@196 -- # declare -a blocks 00:03:53.363 22:03:49 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:53.363 22:03:49 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:53.363 22:03:49 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:53.363 22:03:49 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.363 22:03:49 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:53.363 22:03:49 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:53.363 22:03:49 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:53.363 22:03:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:53.363 22:03:49 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:53.363 22:03:49 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:53.363 22:03:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:53.363 No valid GPT data, bailing 00:03:53.363 22:03:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.363 22:03:49 -- scripts/common.sh@393 -- # pt= 00:03:53.363 22:03:49 -- scripts/common.sh@394 -- # return 1 00:03:53.363 22:03:49 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:53.363 22:03:49 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:53.363 22:03:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:53.363 22:03:49 -- setup/common.sh@80 -- # echo 5368709120 00:03:53.363 22:03:49 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:53.363 22:03:49 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.363 22:03:49 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:53.363 22:03:49 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.363 22:03:49 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:53.363 22:03:49 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:53.363 22:03:49 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:53.363 22:03:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:53.363 22:03:49 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
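The is_block_zoned / get_zoned_devs sequence above reduces to one sysfs read per namespace: if /sys/block/<dev>/queue/zoned contains anything other than "none", the device is zoned and the mount tests leave it alone. A stand-alone sketch of that filter follows; keying the result by device name is just a choice for this sketch, not necessarily how the harness stores it:

#!/usr/bin/env bash
# Sketch: collect zoned NVMe namespaces by reading queue/zoned, as in the trace above.
declare -A zoned_devs=()
for dev in /sys/block/nvme*; do
    [[ -e $dev/queue/zoned ]] || continue
    if [[ $(<"$dev/queue/zoned") != none ]]; then
        zoned_devs[${dev##*/}]=1          # in this run every namespace reports "none"
    fi
done
printf 'zoned namespaces: %d\n' "${#zoned_devs[@]}"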
00:03:53.363 22:03:49 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:53.363 22:03:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:53.363 No valid GPT data, bailing 00:03:53.363 22:03:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:53.363 22:03:49 -- scripts/common.sh@393 -- # pt= 00:03:53.363 22:03:49 -- scripts/common.sh@394 -- # return 1 00:03:53.363 22:03:49 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:53.363 22:03:49 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:53.363 22:03:49 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:53.363 22:03:49 -- setup/common.sh@80 -- # echo 4294967296 00:03:53.363 22:03:49 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:53.363 22:03:49 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.363 22:03:49 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:53.363 22:03:49 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.363 22:03:49 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:53.363 22:03:49 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:53.363 22:03:49 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:53.364 22:03:49 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:53.364 22:03:49 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:03:53.364 22:03:49 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:03:53.364 22:03:49 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:03:53.622 No valid GPT data, bailing 00:03:53.622 22:03:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:53.622 22:03:50 -- scripts/common.sh@393 -- # pt= 00:03:53.622 22:03:50 -- scripts/common.sh@394 -- # return 1 00:03:53.622 22:03:50 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:03:53.622 22:03:50 -- setup/common.sh@76 -- # local dev=nvme1n2 00:03:53.622 22:03:50 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:03:53.622 22:03:50 -- setup/common.sh@80 -- # echo 4294967296 00:03:53.622 22:03:50 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:53.622 22:03:50 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.622 22:03:50 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:53.622 22:03:50 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:53.622 22:03:50 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:03:53.622 22:03:50 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:53.622 22:03:50 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:53.622 22:03:50 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:53.622 22:03:50 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:03:53.622 22:03:50 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:03:53.622 22:03:50 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:03:53.622 No valid GPT data, bailing 00:03:53.622 22:03:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:53.622 22:03:50 -- scripts/common.sh@393 -- # pt= 00:03:53.622 22:03:50 -- scripts/common.sh@394 -- # return 1 00:03:53.622 22:03:50 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:03:53.622 22:03:50 -- setup/common.sh@76 -- # local dev=nvme1n3 00:03:53.622 22:03:50 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:03:53.622 22:03:50 -- setup/common.sh@80 -- # echo 4294967296 
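Each namespace then has to prove it is both unclaimed and big enough. In the trace, spdk-gpt.py and blkid -s PTTYPE find no partition table ("No valid GPT data, bailing", pt=) and the capacity is compared against min_disk_size=3221225472, i.e. 3 GiB; nvme0n1 passes with 5368709120 bytes and the nvme1 namespaces with 4294967296 each. Below is a reduced sketch of that gate using only blkid plus the standard 512-byte-sector size reported under /sys/block; the helper name is invented, and the real flow also consults spdk-gpt.py first:

#!/usr/bin/env bash
# Sketch: accept a namespace only if it has no partition table and >= 3 GiB capacity.
min_disk_size=$((3 * 1024 * 1024 * 1024))            # 3221225472, matching the trace
disk_usable() {
    local block=$1 pt size
    pt=$(blkid -s PTTYPE -o value "/dev/$block" 2>/dev/null || true)
    [[ -z $pt ]] || return 1                          # a partition table means "in use"
    size=$(( $(<"/sys/block/$block/size") * 512 ))    # /sys size is in 512-byte sectors
    (( size >= min_disk_size ))
}
disk_usable nvme0n1 && echo "nvme0n1 qualifies"       # 5368709120 >= 3221225472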
00:03:53.622 22:03:50 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:53.622 22:03:50 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:53.622 22:03:50 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:53.622 22:03:50 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:53.622 22:03:50 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:53.622 22:03:50 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:53.622 22:03:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:53.622 22:03:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:53.622 22:03:50 -- common/autotest_common.sh@10 -- # set +x 00:03:53.622 ************************************ 00:03:53.622 START TEST nvme_mount 00:03:53.622 ************************************ 00:03:53.622 22:03:50 -- common/autotest_common.sh@1114 -- # nvme_mount 00:03:53.622 22:03:50 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:53.622 22:03:50 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:53.622 22:03:50 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.622 22:03:50 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.622 22:03:50 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:53.622 22:03:50 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:53.622 22:03:50 -- setup/common.sh@40 -- # local part_no=1 00:03:53.622 22:03:50 -- setup/common.sh@41 -- # local size=1073741824 00:03:53.622 22:03:50 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:53.622 22:03:50 -- setup/common.sh@44 -- # parts=() 00:03:53.622 22:03:50 -- setup/common.sh@44 -- # local parts 00:03:53.623 22:03:50 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:53.623 22:03:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.623 22:03:50 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:53.623 22:03:50 -- setup/common.sh@46 -- # (( part++ )) 00:03:53.623 22:03:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:53.623 22:03:50 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:53.623 22:03:50 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:53.623 22:03:50 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:54.558 Creating new GPT entries in memory. 00:03:54.558 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:54.558 other utilities. 00:03:54.558 22:03:51 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:54.558 22:03:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.558 22:03:51 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:54.558 22:03:51 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:54.558 22:03:51 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:55.935 Creating new GPT entries in memory. 00:03:55.935 The operation has completed successfully. 
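partition_drive then carves the test disk with sgdisk. The arithmetic in the trace is straightforward: the nominal 1073741824-byte size is divided by 4096 to give 262144, which is used directly as a sector count, so the first partition runs from sector 2048 to 2048 + 262144 - 1 = 264191, exactly the --new=1:2048:264191 call above; the whole thing is serialized with flock while a background sync_dev_uevents.sh listener (the PID the later wait picks up) watches for the partition uevent. A minimal reproduction of just the sgdisk part, where the partprobe settle step is an assumption standing in for that uevent wait, and which should only ever touch a disposable test disk:

#!/usr/bin/env bash
# Sketch: wipe the GPT and recreate the first 262144-sector partition from the trace.
disk=/dev/nvme0n1                      # disposable test namespace from the log
size=1073741824
(( size /= 4096 ))                     # 262144, used as a sector count
part_start=2048
part_end=$(( part_start + size - 1 ))  # 264191
sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"
partprobe "$disk"                      # assumption: settle here instead of sync_dev_uevents.sh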
00:03:55.935 22:03:52 -- setup/common.sh@57 -- # (( part++ )) 00:03:55.935 22:03:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:55.935 22:03:52 -- setup/common.sh@62 -- # wait 53789 00:03:55.935 22:03:52 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.935 22:03:52 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:55.935 22:03:52 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.935 22:03:52 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:55.935 22:03:52 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:55.935 22:03:52 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.935 22:03:52 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:55.935 22:03:52 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:55.935 22:03:52 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:55.935 22:03:52 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.935 22:03:52 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:55.935 22:03:52 -- setup/devices.sh@53 -- # local found=0 00:03:55.935 22:03:52 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.935 22:03:52 -- setup/devices.sh@56 -- # : 00:03:55.935 22:03:52 -- setup/devices.sh@59 -- # local pci status 00:03:55.935 22:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.935 22:03:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:55.935 22:03:52 -- setup/devices.sh@47 -- # setup output config 00:03:55.935 22:03:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.935 22:03:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.935 22:03:52 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:55.935 22:03:52 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:55.935 22:03:52 -- setup/devices.sh@63 -- # found=1 00:03:55.935 22:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.935 22:03:52 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:55.935 22:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.194 22:03:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:56.194 22:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.452 22:03:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:56.452 22:03:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.452 22:03:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.452 22:03:52 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:56.452 22:03:52 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.452 22:03:52 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.452 22:03:52 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:56.452 22:03:52 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:56.452 22:03:52 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.452 22:03:52 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.452 22:03:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.452 22:03:52 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:56.452 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.452 22:03:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.452 22:03:52 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.711 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:56.711 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:56.711 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:56.711 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:56.711 22:03:53 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:56.711 22:03:53 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:56.711 22:03:53 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.711 22:03:53 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:56.711 22:03:53 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:56.711 22:03:53 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.711 22:03:53 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:56.711 22:03:53 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:56.711 22:03:53 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:56.711 22:03:53 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.711 22:03:53 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:56.711 22:03:53 -- setup/devices.sh@53 -- # local found=0 00:03:56.711 22:03:53 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.711 22:03:53 -- setup/devices.sh@56 -- # : 00:03:56.711 22:03:53 -- setup/devices.sh@59 -- # local pci status 00:03:56.711 22:03:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.711 22:03:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:56.711 22:03:53 -- setup/devices.sh@47 -- # setup output config 00:03:56.711 22:03:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.711 22:03:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:56.969 22:03:53 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:56.969 22:03:53 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:56.969 22:03:53 -- setup/devices.sh@63 -- # found=1 00:03:56.969 22:03:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.969 22:03:53 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:56.969 
22:03:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.228 22:03:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:57.228 22:03:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.228 22:03:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:57.228 22:03:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.487 22:03:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.487 22:03:53 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:57.487 22:03:53 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:57.487 22:03:53 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:57.487 22:03:53 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:57.487 22:03:53 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:57.487 22:03:53 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:03:57.487 22:03:53 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:57.487 22:03:53 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:57.487 22:03:53 -- setup/devices.sh@50 -- # local mount_point= 00:03:57.487 22:03:53 -- setup/devices.sh@51 -- # local test_file= 00:03:57.487 22:03:53 -- setup/devices.sh@53 -- # local found=0 00:03:57.487 22:03:53 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:57.487 22:03:53 -- setup/devices.sh@59 -- # local pci status 00:03:57.487 22:03:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.487 22:03:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:57.487 22:03:53 -- setup/devices.sh@47 -- # setup output config 00:03:57.487 22:03:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.487 22:03:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:57.746 22:03:54 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:57.746 22:03:54 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:57.746 22:03:54 -- setup/devices.sh@63 -- # found=1 00:03:57.746 22:03:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.746 22:03:54 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:57.746 22:03:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.003 22:03:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:58.004 22:03:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.004 22:03:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:58.004 22:03:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.263 22:03:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.263 22:03:54 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:58.263 22:03:54 -- setup/devices.sh@68 -- # return 0 00:03:58.263 22:03:54 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:58.263 22:03:54 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.263 22:03:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.263 22:03:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:58.263 22:03:54 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:58.263 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:03:58.263 00:03:58.263 real 0m4.520s 00:03:58.263 user 0m0.989s 00:03:58.263 sys 0m1.220s 00:03:58.263 22:03:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:58.263 22:03:54 -- common/autotest_common.sh@10 -- # set +x 00:03:58.263 ************************************ 00:03:58.263 END TEST nvme_mount 00:03:58.263 ************************************ 00:03:58.263 22:03:54 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:58.263 22:03:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.263 22:03:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.263 22:03:54 -- common/autotest_common.sh@10 -- # set +x 00:03:58.263 ************************************ 00:03:58.263 START TEST dm_mount 00:03:58.263 ************************************ 00:03:58.263 22:03:54 -- common/autotest_common.sh@1114 -- # dm_mount 00:03:58.263 22:03:54 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:58.263 22:03:54 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:58.263 22:03:54 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:58.263 22:03:54 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:58.263 22:03:54 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:58.263 22:03:54 -- setup/common.sh@40 -- # local part_no=2 00:03:58.263 22:03:54 -- setup/common.sh@41 -- # local size=1073741824 00:03:58.263 22:03:54 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:58.263 22:03:54 -- setup/common.sh@44 -- # parts=() 00:03:58.263 22:03:54 -- setup/common.sh@44 -- # local parts 00:03:58.263 22:03:54 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:58.263 22:03:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.263 22:03:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:58.263 22:03:54 -- setup/common.sh@46 -- # (( part++ )) 00:03:58.263 22:03:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.263 22:03:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:58.263 22:03:54 -- setup/common.sh@46 -- # (( part++ )) 00:03:58.263 22:03:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.263 22:03:54 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:58.263 22:03:54 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:58.263 22:03:54 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:59.200 Creating new GPT entries in memory. 00:03:59.200 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:59.200 other utilities. 00:03:59.200 22:03:55 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:59.200 22:03:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.200 22:03:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:59.200 22:03:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:59.200 22:03:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:00.577 Creating new GPT entries in memory. 00:04:00.577 The operation has completed successfully. 00:04:00.577 22:03:56 -- setup/common.sh@57 -- # (( part++ )) 00:04:00.577 22:03:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.577 22:03:56 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
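The nvme_mount test that closed just above is one short format/mount/teardown cycle: mkfs.ext4 -qF onto the partition (and later onto the bare disk with a 1024M limit), mount it under test/setup/nvme_mount, drop a test_nvme marker file, verify, then unmount and wipefs both the partition and the disk, which is what produced the "2 bytes were erased at offset 0x00000438 (ext4): 53 ef" lines. A condensed sketch of one such pass, not the helper itself and with the verify step elided; the dm_mount run now partitioning the same disk repeats the pattern on two partitions that both end up held by dm-0:

#!/usr/bin/env bash
# Sketch of one nvme_mount pass: format, mount, mark, then tear down.
set -e
dev=/dev/nvme0n1p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount   # path from the log; any scratch dir works
mkdir -p "$mnt"
mkfs.ext4 -qF "$dev"
mount "$dev" "$mnt"
: > "$mnt/test_nvme"             # marker file the verify step looks for
# ... device/mount verification happens here in the real test ...
rm "$mnt/test_nvme"
umount "$mnt"
wipefs --all "$dev"              # erases the ext4 magic (the "53 ef" bytes at 0x438)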
2048 : part_end + 1 )) 00:04:00.577 22:03:56 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.577 22:03:56 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:01.514 The operation has completed successfully. 00:04:01.514 22:03:57 -- setup/common.sh@57 -- # (( part++ )) 00:04:01.514 22:03:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.514 22:03:57 -- setup/common.sh@62 -- # wait 54243 00:04:01.514 22:03:57 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:01.514 22:03:57 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.514 22:03:57 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:01.514 22:03:57 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:01.514 22:03:57 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:01.514 22:03:57 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:01.514 22:03:57 -- setup/devices.sh@161 -- # break 00:04:01.514 22:03:57 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:01.514 22:03:57 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:01.514 22:03:57 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:01.514 22:03:57 -- setup/devices.sh@166 -- # dm=dm-0 00:04:01.514 22:03:57 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:01.514 22:03:57 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:01.514 22:03:57 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.514 22:03:57 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:01.514 22:03:57 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.514 22:03:57 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:01.514 22:03:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:01.514 22:03:57 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.514 22:03:57 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:01.514 22:03:57 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:01.514 22:03:57 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:01.514 22:03:57 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:01.514 22:03:57 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:01.514 22:03:57 -- setup/devices.sh@53 -- # local found=0 00:04:01.514 22:03:57 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:01.514 22:03:57 -- setup/devices.sh@56 -- # : 00:04:01.514 22:03:57 -- setup/devices.sh@59 -- # local pci status 00:04:01.514 22:03:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.514 22:03:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:01.514 22:03:57 -- setup/devices.sh@47 -- # setup output config 00:04:01.514 22:03:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.514 22:03:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.514 22:03:58 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:01.514 22:03:58 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:01.514 22:03:58 -- setup/devices.sh@63 -- # found=1 00:04:01.514 22:03:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.514 22:03:58 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:01.514 22:03:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.082 22:03:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:02.082 22:03:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.082 22:03:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:02.082 22:03:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.082 22:03:58 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.082 22:03:58 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:02.082 22:03:58 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.082 22:03:58 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:02.082 22:03:58 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:02.082 22:03:58 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.082 22:03:58 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:02.082 22:03:58 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:02.082 22:03:58 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:02.082 22:03:58 -- setup/devices.sh@50 -- # local mount_point= 00:04:02.082 22:03:58 -- setup/devices.sh@51 -- # local test_file= 00:04:02.082 22:03:58 -- setup/devices.sh@53 -- # local found=0 00:04:02.082 22:03:58 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:02.082 22:03:58 -- setup/devices.sh@59 -- # local pci status 00:04:02.082 22:03:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.082 22:03:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:02.082 22:03:58 -- setup/devices.sh@47 -- # setup output config 00:04:02.082 22:03:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.082 22:03:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:02.340 22:03:58 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:02.340 22:03:58 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:02.340 22:03:58 -- setup/devices.sh@63 -- # found=1 00:04:02.340 22:03:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.340 22:03:58 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:02.340 22:03:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.614 22:03:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:02.614 22:03:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.614 22:03:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:02.614 22:03:59 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.900 22:03:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.900 22:03:59 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:02.900 22:03:59 -- setup/devices.sh@68 -- # return 0 00:04:02.900 22:03:59 -- setup/devices.sh@187 -- # cleanup_dm 00:04:02.900 22:03:59 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:02.900 22:03:59 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:02.900 22:03:59 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:02.900 22:03:59 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:02.900 22:03:59 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:02.900 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:02.900 22:03:59 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:02.900 22:03:59 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:02.900 00:04:02.900 real 0m4.614s 00:04:02.900 user 0m0.679s 00:04:02.900 sys 0m0.864s 00:04:02.900 22:03:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:02.900 ************************************ 00:04:02.900 END TEST dm_mount 00:04:02.900 ************************************ 00:04:02.900 22:03:59 -- common/autotest_common.sh@10 -- # set +x 00:04:02.900 22:03:59 -- setup/devices.sh@1 -- # cleanup 00:04:02.900 22:03:59 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:02.900 22:03:59 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.900 22:03:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:02.900 22:03:59 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:02.900 22:03:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:02.900 22:03:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:03.159 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:03.159 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:03.159 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:03.159 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:03.159 22:03:59 -- setup/devices.sh@12 -- # cleanup_dm 00:04:03.159 22:03:59 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:03.159 22:03:59 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:03.159 22:03:59 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.159 22:03:59 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:03.159 22:03:59 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.159 22:03:59 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:03.159 00:04:03.159 real 0m10.804s 00:04:03.159 user 0m2.403s 00:04:03.159 sys 0m2.732s 00:04:03.159 22:03:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:03.159 22:03:59 -- common/autotest_common.sh@10 -- # set +x 00:04:03.159 ************************************ 00:04:03.159 END TEST devices 00:04:03.159 ************************************ 00:04:03.159 ************************************ 00:04:03.159 END TEST setup.sh 00:04:03.159 ************************************ 00:04:03.159 00:04:03.159 real 0m22.890s 00:04:03.159 user 0m7.715s 00:04:03.159 sys 0m9.633s 00:04:03.159 22:03:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:03.159 22:03:59 -- common/autotest_common.sh@10 -- # set +x 00:04:03.159 22:03:59 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:03.419 Hugepages 00:04:03.419 node hugesize free / total 00:04:03.419 node0 1048576kB 0 / 0 00:04:03.419 node0 2048kB 2048 / 2048 00:04:03.419 00:04:03.419 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.419 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:03.678 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:03.678 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:03.678 22:04:00 -- spdk/autotest.sh@128 -- # uname -s 00:04:03.678 22:04:00 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:03.678 22:04:00 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:03.678 22:04:00 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.246 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.505 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.505 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.505 22:04:01 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:05.442 22:04:02 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:05.442 22:04:02 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:05.442 22:04:02 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:05.442 22:04:02 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:05.442 22:04:02 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:05.442 22:04:02 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:05.442 22:04:02 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.442 22:04:02 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:05.442 22:04:02 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:05.700 22:04:02 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:05.700 22:04:02 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:05.700 22:04:02 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.959 Waiting for block devices as requested 00:04:05.959 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.218 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.218 22:04:02 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:06.218 22:04:02 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:06.218 22:04:02 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:06.218 22:04:02 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:06.218 22:04:02 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:06.218 22:04:02 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:06.218 22:04:02 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:06.218 22:04:02 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:06.218 22:04:02 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:06.218 22:04:02 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:06.218 22:04:02 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:06.218 22:04:02 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:06.218 22:04:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.218 22:04:02 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:06.218 22:04:02 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:06.218 22:04:02 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:06.218 22:04:02 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:06.218 22:04:02 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:06.218 22:04:02 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:06.218 22:04:02 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:06.218 22:04:02 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:06.218 22:04:02 -- common/autotest_common.sh@1552 -- # continue 00:04:06.218 22:04:02 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:06.218 22:04:02 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:06.218 22:04:02 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:06.218 22:04:02 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:06.218 22:04:02 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:06.218 22:04:02 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:04:06.218 22:04:02 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:06.218 22:04:02 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:06.218 22:04:02 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:06.218 22:04:02 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:06.218 22:04:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:06.218 22:04:02 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:06.218 22:04:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.218 22:04:02 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:06.218 22:04:02 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:06.218 22:04:02 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:06.218 22:04:02 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:06.218 22:04:02 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:06.218 22:04:02 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:06.218 22:04:02 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:06.218 22:04:02 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:06.218 22:04:02 -- common/autotest_common.sh@1552 -- # continue 00:04:06.218 22:04:02 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:06.218 22:04:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:06.218 22:04:02 -- common/autotest_common.sh@10 -- # set +x 00:04:06.218 22:04:02 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:06.218 22:04:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:06.218 22:04:02 -- common/autotest_common.sh@10 -- # set +x 00:04:06.218 22:04:02 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.155 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.155 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:04:07.155 22:04:03 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:07.155 22:04:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:07.155 22:04:03 -- common/autotest_common.sh@10 -- # set +x 00:04:07.155 22:04:03 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:07.155 22:04:03 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:07.155 22:04:03 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:07.155 22:04:03 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:07.155 22:04:03 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:07.155 22:04:03 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:07.155 22:04:03 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:07.155 22:04:03 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:07.155 22:04:03 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:07.155 22:04:03 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:07.155 22:04:03 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:07.415 22:04:03 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:07.415 22:04:03 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:07.415 22:04:03 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:07.415 22:04:03 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:07.415 22:04:03 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:07.415 22:04:03 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:07.415 22:04:03 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:07.415 22:04:03 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:07.415 22:04:03 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:07.415 22:04:03 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:07.415 22:04:03 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:07.415 22:04:03 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:07.415 22:04:03 -- common/autotest_common.sh@1588 -- # return 0 00:04:07.415 22:04:03 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:07.415 22:04:03 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:07.415 22:04:03 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:07.415 22:04:03 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:07.415 22:04:03 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:07.415 22:04:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:07.415 22:04:03 -- common/autotest_common.sh@10 -- # set +x 00:04:07.415 22:04:03 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:07.415 22:04:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.415 22:04:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.415 22:04:03 -- common/autotest_common.sh@10 -- # set +x 00:04:07.415 ************************************ 00:04:07.415 START TEST env 00:04:07.415 ************************************ 00:04:07.415 22:04:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:07.415 * Looking for test storage... 
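The opal_revert_cleanup step above reduces to enumerating the NVMe BDFs with gen_nvme.sh and comparing each controller's PCI device ID against 0x0a54; both emulated controllers in this run report 0x0010, so the list stays empty and the revert is skipped. A minimal sketch of that check, assuming the same repo paths as this run:

    bdfs=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
    for bdf in $bdfs; do
        # read the PCI device ID from sysfs (0x0010 in this run)
        dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ "$dev_id" == "0x0a54" ]] && echo "$bdf"    # only these would get an Opal revert
    done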
00:04:07.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:07.415 22:04:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:07.415 22:04:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:07.415 22:04:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:07.415 22:04:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:07.415 22:04:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:07.415 22:04:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:07.415 22:04:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:07.415 22:04:03 -- scripts/common.sh@335 -- # IFS=.-: 00:04:07.415 22:04:03 -- scripts/common.sh@335 -- # read -ra ver1 00:04:07.415 22:04:03 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.415 22:04:03 -- scripts/common.sh@336 -- # read -ra ver2 00:04:07.415 22:04:03 -- scripts/common.sh@337 -- # local 'op=<' 00:04:07.415 22:04:03 -- scripts/common.sh@339 -- # ver1_l=2 00:04:07.415 22:04:03 -- scripts/common.sh@340 -- # ver2_l=1 00:04:07.415 22:04:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:07.415 22:04:03 -- scripts/common.sh@343 -- # case "$op" in 00:04:07.415 22:04:03 -- scripts/common.sh@344 -- # : 1 00:04:07.415 22:04:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:07.415 22:04:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.415 22:04:03 -- scripts/common.sh@364 -- # decimal 1 00:04:07.415 22:04:03 -- scripts/common.sh@352 -- # local d=1 00:04:07.415 22:04:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.415 22:04:03 -- scripts/common.sh@354 -- # echo 1 00:04:07.415 22:04:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:07.415 22:04:03 -- scripts/common.sh@365 -- # decimal 2 00:04:07.415 22:04:03 -- scripts/common.sh@352 -- # local d=2 00:04:07.415 22:04:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.415 22:04:03 -- scripts/common.sh@354 -- # echo 2 00:04:07.415 22:04:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:07.415 22:04:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:07.415 22:04:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:07.415 22:04:03 -- scripts/common.sh@367 -- # return 0 00:04:07.415 22:04:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.415 22:04:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:07.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.415 --rc genhtml_branch_coverage=1 00:04:07.415 --rc genhtml_function_coverage=1 00:04:07.415 --rc genhtml_legend=1 00:04:07.415 --rc geninfo_all_blocks=1 00:04:07.415 --rc geninfo_unexecuted_blocks=1 00:04:07.415 00:04:07.415 ' 00:04:07.415 22:04:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:07.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.415 --rc genhtml_branch_coverage=1 00:04:07.415 --rc genhtml_function_coverage=1 00:04:07.415 --rc genhtml_legend=1 00:04:07.415 --rc geninfo_all_blocks=1 00:04:07.415 --rc geninfo_unexecuted_blocks=1 00:04:07.415 00:04:07.415 ' 00:04:07.415 22:04:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:07.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.415 --rc genhtml_branch_coverage=1 00:04:07.415 --rc genhtml_function_coverage=1 00:04:07.415 --rc genhtml_legend=1 00:04:07.415 --rc geninfo_all_blocks=1 00:04:07.415 --rc geninfo_unexecuted_blocks=1 00:04:07.415 00:04:07.415 ' 00:04:07.415 22:04:03 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:07.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.415 --rc genhtml_branch_coverage=1 00:04:07.415 --rc genhtml_function_coverage=1 00:04:07.415 --rc genhtml_legend=1 00:04:07.415 --rc geninfo_all_blocks=1 00:04:07.415 --rc geninfo_unexecuted_blocks=1 00:04:07.415 00:04:07.415 ' 00:04:07.415 22:04:03 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:07.415 22:04:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.415 22:04:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.415 22:04:03 -- common/autotest_common.sh@10 -- # set +x 00:04:07.415 ************************************ 00:04:07.415 START TEST env_memory 00:04:07.415 ************************************ 00:04:07.415 22:04:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:07.415 00:04:07.415 00:04:07.415 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.415 http://cunit.sourceforge.net/ 00:04:07.415 00:04:07.415 00:04:07.415 Suite: memory 00:04:07.675 Test: alloc and free memory map ...[2024-11-17 22:04:04.059609] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:07.675 passed 00:04:07.675 Test: mem map translation ...[2024-11-17 22:04:04.099343] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:07.675 [2024-11-17 22:04:04.099533] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:07.675 [2024-11-17 22:04:04.099728] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:07.675 [2024-11-17 22:04:04.099946] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:07.675 passed 00:04:07.675 Test: mem map registration ...[2024-11-17 22:04:04.163952] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:07.675 [2024-11-17 22:04:04.164125] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:07.675 passed 00:04:07.675 Test: mem map adjacent registrations ...passed 00:04:07.675 00:04:07.675 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.675 suites 1 1 n/a 0 0 00:04:07.675 tests 4 4 4 0 0 00:04:07.675 asserts 152 152 152 0 n/a 00:04:07.675 00:04:07.675 Elapsed time = 0.217 seconds 00:04:07.675 00:04:07.675 real 0m0.248s 00:04:07.675 user 0m0.218s 00:04:07.675 sys 0m0.014s 00:04:07.675 22:04:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.675 ************************************ 00:04:07.675 END TEST env_memory 00:04:07.675 ************************************ 00:04:07.675 22:04:04 -- common/autotest_common.sh@10 -- # set +x 00:04:07.935 22:04:04 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:07.935 22:04:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.935 22:04:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.935 22:04:04 -- 
common/autotest_common.sh@10 -- # set +x 00:04:07.935 ************************************ 00:04:07.935 START TEST env_vtophys 00:04:07.935 ************************************ 00:04:07.935 22:04:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:07.935 EAL: lib.eal log level changed from notice to debug 00:04:07.935 EAL: Detected lcore 0 as core 0 on socket 0 00:04:07.935 EAL: Detected lcore 1 as core 0 on socket 0 00:04:07.935 EAL: Detected lcore 2 as core 0 on socket 0 00:04:07.935 EAL: Detected lcore 3 as core 0 on socket 0 00:04:07.935 EAL: Detected lcore 4 as core 0 on socket 0 00:04:07.935 EAL: Detected lcore 5 as core 0 on socket 0 00:04:07.935 EAL: Detected lcore 6 as core 0 on socket 0 00:04:07.935 EAL: Detected lcore 7 as core 0 on socket 0 00:04:07.935 EAL: Detected lcore 8 as core 0 on socket 0 00:04:07.935 EAL: Detected lcore 9 as core 0 on socket 0 00:04:07.935 EAL: Maximum logical cores by configuration: 128 00:04:07.935 EAL: Detected CPU lcores: 10 00:04:07.935 EAL: Detected NUMA nodes: 1 00:04:07.935 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:07.935 EAL: Detected shared linkage of DPDK 00:04:07.935 EAL: No shared files mode enabled, IPC will be disabled 00:04:07.935 EAL: Selected IOVA mode 'PA' 00:04:07.935 EAL: Probing VFIO support... 00:04:07.935 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:07.935 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:07.935 EAL: Ask a virtual area of 0x2e000 bytes 00:04:07.935 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:07.935 EAL: Setting up physically contiguous memory... 00:04:07.935 EAL: Setting maximum number of open files to 524288 00:04:07.935 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:07.935 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:07.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.935 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:07.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.935 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:07.935 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:07.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.935 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:07.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.935 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:07.935 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:07.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.935 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:07.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.935 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:07.935 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:07.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.935 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:07.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.935 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:07.935 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
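Each of the four memseg lists above reserves a small bookkeeping area (0x61000 bytes) plus a 16 GiB virtual window, and 0x400000000 is simply n_segs times hugepage_sz for the 8192 x 2 MiB segments being described. A quick sanity check:

    # 8192 segments of 2 MiB each per memseg list
    printf '0x%x bytes\n' $((8192 * 2097152))    # prints 0x400000000 bytes (16 GiB)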
00:04:07.935 EAL: Hugepages will be freed exactly as allocated. 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: TSC frequency is ~2200000 KHz 00:04:07.935 EAL: Main lcore 0 is ready (tid=7f6275c0fa00;cpuset=[0]) 00:04:07.935 EAL: Trying to obtain current memory policy. 00:04:07.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.935 EAL: Restoring previous memory policy: 0 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was expanded by 2MB 00:04:07.935 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:07.935 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:07.935 EAL: Mem event callback 'spdk:(nil)' registered 00:04:07.935 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:07.935 00:04:07.935 00:04:07.935 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.935 http://cunit.sourceforge.net/ 00:04:07.935 00:04:07.935 00:04:07.935 Suite: components_suite 00:04:07.935 Test: vtophys_malloc_test ...passed 00:04:07.935 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:07.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.935 EAL: Restoring previous memory policy: 4 00:04:07.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was expanded by 4MB 00:04:07.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was shrunk by 4MB 00:04:07.935 EAL: Trying to obtain current memory policy. 00:04:07.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.935 EAL: Restoring previous memory policy: 4 00:04:07.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.935 EAL: Trying to obtain current memory policy. 00:04:07.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.935 EAL: Restoring previous memory policy: 4 00:04:07.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.935 EAL: Trying to obtain current memory policy. 
00:04:07.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.935 EAL: Restoring previous memory policy: 4 00:04:07.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.935 EAL: Trying to obtain current memory policy. 00:04:07.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.935 EAL: Restoring previous memory policy: 4 00:04:07.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was expanded by 34MB 00:04:07.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.935 EAL: Trying to obtain current memory policy. 00:04:07.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.935 EAL: Restoring previous memory policy: 4 00:04:07.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.935 EAL: request: mp_malloc_sync 00:04:07.935 EAL: No shared files mode enabled, IPC is disabled 00:04:07.935 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.935 EAL: Trying to obtain current memory policy. 00:04:07.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.195 EAL: Restoring previous memory policy: 4 00:04:08.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.195 EAL: request: mp_malloc_sync 00:04:08.195 EAL: No shared files mode enabled, IPC is disabled 00:04:08.195 EAL: Heap on socket 0 was expanded by 130MB 00:04:08.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.195 EAL: request: mp_malloc_sync 00:04:08.195 EAL: No shared files mode enabled, IPC is disabled 00:04:08.195 EAL: Heap on socket 0 was shrunk by 130MB 00:04:08.195 EAL: Trying to obtain current memory policy. 00:04:08.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.195 EAL: Restoring previous memory policy: 4 00:04:08.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.195 EAL: request: mp_malloc_sync 00:04:08.195 EAL: No shared files mode enabled, IPC is disabled 00:04:08.195 EAL: Heap on socket 0 was expanded by 258MB 00:04:08.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.195 EAL: request: mp_malloc_sync 00:04:08.195 EAL: No shared files mode enabled, IPC is disabled 00:04:08.195 EAL: Heap on socket 0 was shrunk by 258MB 00:04:08.195 EAL: Trying to obtain current memory policy. 
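The expand/shrink sizes reported by vtophys_spdk_malloc_test are each 2 MiB larger than a power of two (4MB, 6MB, 10MB, 18MB, ..., continuing to 514MB and 1026MB below), consistent with the test doubling its allocation every round while the heap grows in 2 MiB hugepage increments. A hypothetical one-liner that reproduces the series:

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB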
00:04:08.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.455 EAL: Restoring previous memory policy: 4 00:04:08.455 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.455 EAL: request: mp_malloc_sync 00:04:08.455 EAL: No shared files mode enabled, IPC is disabled 00:04:08.455 EAL: Heap on socket 0 was expanded by 514MB 00:04:08.455 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.714 EAL: request: mp_malloc_sync 00:04:08.714 EAL: No shared files mode enabled, IPC is disabled 00:04:08.714 EAL: Heap on socket 0 was shrunk by 514MB 00:04:08.714 EAL: Trying to obtain current memory policy. 00:04:08.714 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.714 EAL: Restoring previous memory policy: 4 00:04:08.714 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.714 EAL: request: mp_malloc_sync 00:04:08.714 EAL: No shared files mode enabled, IPC is disabled 00:04:08.714 EAL: Heap on socket 0 was expanded by 1026MB 00:04:08.973 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.233 passed 00:04:09.233 00:04:09.233 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.233 suites 1 1 n/a 0 0 00:04:09.233 tests 2 2 2 0 0 00:04:09.233 asserts 5183 5183 5183 0 n/a 00:04:09.233 00:04:09.233 Elapsed time = 1.191 seconds 00:04:09.233 EAL: request: mp_malloc_sync 00:04:09.233 EAL: No shared files mode enabled, IPC is disabled 00:04:09.233 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:09.233 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.233 EAL: request: mp_malloc_sync 00:04:09.233 EAL: No shared files mode enabled, IPC is disabled 00:04:09.233 EAL: Heap on socket 0 was shrunk by 2MB 00:04:09.233 EAL: No shared files mode enabled, IPC is disabled 00:04:09.233 EAL: No shared files mode enabled, IPC is disabled 00:04:09.233 EAL: No shared files mode enabled, IPC is disabled 00:04:09.233 ************************************ 00:04:09.233 END TEST env_vtophys 00:04:09.233 ************************************ 00:04:09.233 00:04:09.233 real 0m1.396s 00:04:09.233 user 0m0.762s 00:04:09.233 sys 0m0.495s 00:04:09.233 22:04:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.233 22:04:05 -- common/autotest_common.sh@10 -- # set +x 00:04:09.233 22:04:05 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:09.233 22:04:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.233 22:04:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.233 22:04:05 -- common/autotest_common.sh@10 -- # set +x 00:04:09.233 ************************************ 00:04:09.233 START TEST env_pci 00:04:09.233 ************************************ 00:04:09.233 22:04:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:09.233 00:04:09.233 00:04:09.233 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.233 http://cunit.sourceforge.net/ 00:04:09.233 00:04:09.233 00:04:09.233 Suite: pci 00:04:09.233 Test: pci_hook ...[2024-11-17 22:04:05.772632] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55387 has claimed it 00:04:09.233 passed 00:04:09.233 00:04:09.233 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.233 suites 1 1 n/a 0 0 00:04:09.233 tests 1 1 1 0 0 00:04:09.233 asserts 25 25 25 0 n/a 00:04:09.233 00:04:09.233 Elapsed time = 0.002 seconds 00:04:09.233 EAL: Cannot find device (10000:00:01.0) 00:04:09.233 EAL: Failed to attach device 
on primary process 00:04:09.233 ************************************ 00:04:09.233 END TEST env_pci 00:04:09.233 ************************************ 00:04:09.233 00:04:09.233 real 0m0.023s 00:04:09.233 user 0m0.012s 00:04:09.233 sys 0m0.011s 00:04:09.233 22:04:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.233 22:04:05 -- common/autotest_common.sh@10 -- # set +x 00:04:09.233 22:04:05 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:09.233 22:04:05 -- env/env.sh@15 -- # uname 00:04:09.233 22:04:05 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:09.233 22:04:05 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:09.233 22:04:05 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.233 22:04:05 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:09.233 22:04:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.233 22:04:05 -- common/autotest_common.sh@10 -- # set +x 00:04:09.233 ************************************ 00:04:09.233 START TEST env_dpdk_post_init 00:04:09.233 ************************************ 00:04:09.233 22:04:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.493 EAL: Detected CPU lcores: 10 00:04:09.493 EAL: Detected NUMA nodes: 1 00:04:09.493 EAL: Detected shared linkage of DPDK 00:04:09.493 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:09.493 EAL: Selected IOVA mode 'PA' 00:04:09.493 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:09.493 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:09.493 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:09.493 Starting DPDK initialization... 00:04:09.493 Starting SPDK post initialization... 00:04:09.493 SPDK NVMe probe 00:04:09.493 Attaching to 0000:00:06.0 00:04:09.493 Attaching to 0000:00:07.0 00:04:09.493 Attached to 0000:00:06.0 00:04:09.493 Attached to 0000:00:07.0 00:04:09.493 Cleaning up... 
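env_dpdk_post_init attaches to the two controllers that setup.sh bound to uio_pci_generic earlier in this log. The current binding for each function can be read straight from sysfs; a small sketch using the BDFs from this run:

    for bdf in 0000:00:06.0 0000:00:07.0; do
        printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
    done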
00:04:09.493 ************************************ 00:04:09.493 END TEST env_dpdk_post_init 00:04:09.493 ************************************ 00:04:09.493 00:04:09.493 real 0m0.176s 00:04:09.493 user 0m0.039s 00:04:09.493 sys 0m0.038s 00:04:09.493 22:04:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.493 22:04:06 -- common/autotest_common.sh@10 -- # set +x 00:04:09.493 22:04:06 -- env/env.sh@26 -- # uname 00:04:09.493 22:04:06 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:09.493 22:04:06 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:09.493 22:04:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.493 22:04:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.493 22:04:06 -- common/autotest_common.sh@10 -- # set +x 00:04:09.493 ************************************ 00:04:09.493 START TEST env_mem_callbacks 00:04:09.493 ************************************ 00:04:09.493 22:04:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:09.493 EAL: Detected CPU lcores: 10 00:04:09.493 EAL: Detected NUMA nodes: 1 00:04:09.493 EAL: Detected shared linkage of DPDK 00:04:09.493 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:09.752 EAL: Selected IOVA mode 'PA' 00:04:09.752 00:04:09.752 00:04:09.752 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.752 http://cunit.sourceforge.net/ 00:04:09.752 00:04:09.752 00:04:09.752 Suite: memory 00:04:09.752 Test: test ... 00:04:09.752 register 0x200000200000 2097152 00:04:09.752 malloc 3145728 00:04:09.752 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:09.752 register 0x200000400000 4194304 00:04:09.752 buf 0x200000500000 len 3145728 PASSED 00:04:09.752 malloc 64 00:04:09.752 buf 0x2000004fff40 len 64 PASSED 00:04:09.752 malloc 4194304 00:04:09.752 register 0x200000800000 6291456 00:04:09.752 buf 0x200000a00000 len 4194304 PASSED 00:04:09.752 free 0x200000500000 3145728 00:04:09.752 free 0x2000004fff40 64 00:04:09.752 unregister 0x200000400000 4194304 PASSED 00:04:09.752 free 0x200000a00000 4194304 00:04:09.752 unregister 0x200000800000 6291456 PASSED 00:04:09.752 malloc 8388608 00:04:09.752 register 0x200000400000 10485760 00:04:09.752 buf 0x200000600000 len 8388608 PASSED 00:04:09.752 free 0x200000600000 8388608 00:04:09.752 unregister 0x200000400000 10485760 PASSED 00:04:09.752 passed 00:04:09.752 00:04:09.752 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.752 suites 1 1 n/a 0 0 00:04:09.752 tests 1 1 1 0 0 00:04:09.752 asserts 15 15 15 0 n/a 00:04:09.752 00:04:09.752 Elapsed time = 0.008 seconds 00:04:09.752 00:04:09.752 real 0m0.148s 00:04:09.752 user 0m0.018s 00:04:09.752 sys 0m0.027s 00:04:09.752 22:04:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.752 ************************************ 00:04:09.752 END TEST env_mem_callbacks 00:04:09.752 ************************************ 00:04:09.752 22:04:06 -- common/autotest_common.sh@10 -- # set +x 00:04:09.752 ************************************ 00:04:09.752 END TEST env 00:04:09.752 ************************************ 00:04:09.752 00:04:09.752 real 0m2.462s 00:04:09.752 user 0m1.254s 00:04:09.752 sys 0m0.839s 00:04:09.752 22:04:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.752 22:04:06 -- common/autotest_common.sh@10 -- # set +x 00:04:09.752 22:04:06 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
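Every test above and below is wrapped by the run_test helper from autotest_common.sh, which brackets each test with START TEST / END TEST banners and a real/user/sys timing. Conceptually it behaves like the sketch below (the real helper also checks its argument count and manages xtrace; this is only an outline):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        "$@"; local rc=$?
        echo "END TEST $name"
        return "$rc"
    }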
00:04:09.752 22:04:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.752 22:04:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.752 22:04:06 -- common/autotest_common.sh@10 -- # set +x 00:04:09.752 ************************************ 00:04:09.752 START TEST rpc 00:04:09.752 ************************************ 00:04:09.752 22:04:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:10.012 * Looking for test storage... 00:04:10.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.012 22:04:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:10.012 22:04:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:10.012 22:04:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:10.012 22:04:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:10.012 22:04:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:10.012 22:04:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:10.012 22:04:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:10.012 22:04:06 -- scripts/common.sh@335 -- # IFS=.-: 00:04:10.012 22:04:06 -- scripts/common.sh@335 -- # read -ra ver1 00:04:10.012 22:04:06 -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.012 22:04:06 -- scripts/common.sh@336 -- # read -ra ver2 00:04:10.012 22:04:06 -- scripts/common.sh@337 -- # local 'op=<' 00:04:10.012 22:04:06 -- scripts/common.sh@339 -- # ver1_l=2 00:04:10.012 22:04:06 -- scripts/common.sh@340 -- # ver2_l=1 00:04:10.012 22:04:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:10.012 22:04:06 -- scripts/common.sh@343 -- # case "$op" in 00:04:10.012 22:04:06 -- scripts/common.sh@344 -- # : 1 00:04:10.012 22:04:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:10.012 22:04:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.012 22:04:06 -- scripts/common.sh@364 -- # decimal 1 00:04:10.012 22:04:06 -- scripts/common.sh@352 -- # local d=1 00:04:10.012 22:04:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.012 22:04:06 -- scripts/common.sh@354 -- # echo 1 00:04:10.012 22:04:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:10.012 22:04:06 -- scripts/common.sh@365 -- # decimal 2 00:04:10.012 22:04:06 -- scripts/common.sh@352 -- # local d=2 00:04:10.012 22:04:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.012 22:04:06 -- scripts/common.sh@354 -- # echo 2 00:04:10.012 22:04:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:10.012 22:04:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:10.012 22:04:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:10.013 22:04:06 -- scripts/common.sh@367 -- # return 0 00:04:10.013 22:04:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.013 22:04:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:10.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.013 --rc genhtml_branch_coverage=1 00:04:10.013 --rc genhtml_function_coverage=1 00:04:10.013 --rc genhtml_legend=1 00:04:10.013 --rc geninfo_all_blocks=1 00:04:10.013 --rc geninfo_unexecuted_blocks=1 00:04:10.013 00:04:10.013 ' 00:04:10.013 22:04:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:10.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.013 --rc genhtml_branch_coverage=1 00:04:10.013 --rc genhtml_function_coverage=1 00:04:10.013 --rc genhtml_legend=1 00:04:10.013 --rc geninfo_all_blocks=1 00:04:10.013 --rc geninfo_unexecuted_blocks=1 00:04:10.013 00:04:10.013 ' 00:04:10.013 22:04:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:10.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.013 --rc genhtml_branch_coverage=1 00:04:10.013 --rc genhtml_function_coverage=1 00:04:10.013 --rc genhtml_legend=1 00:04:10.013 --rc geninfo_all_blocks=1 00:04:10.013 --rc geninfo_unexecuted_blocks=1 00:04:10.013 00:04:10.013 ' 00:04:10.013 22:04:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:10.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.013 --rc genhtml_branch_coverage=1 00:04:10.013 --rc genhtml_function_coverage=1 00:04:10.013 --rc genhtml_legend=1 00:04:10.013 --rc geninfo_all_blocks=1 00:04:10.013 --rc geninfo_unexecuted_blocks=1 00:04:10.013 00:04:10.013 ' 00:04:10.013 22:04:06 -- rpc/rpc.sh@65 -- # spdk_pid=55509 00:04:10.013 22:04:06 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:10.013 22:04:06 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.013 22:04:06 -- rpc/rpc.sh@67 -- # waitforlisten 55509 00:04:10.013 22:04:06 -- common/autotest_common.sh@829 -- # '[' -z 55509 ']' 00:04:10.013 22:04:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.013 22:04:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:10.013 22:04:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
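rpc.sh starts the SPDK target with the bdev tracepoint group enabled (-e bdev) and then waits for its JSON-RPC socket before issuing any rpc_cmd calls. A rough equivalent of that startup, reusing the binary and socket path shown in this run (waitforlisten is the real autotest_common.sh helper; the polling loop below is only an illustration):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # ... run the rpc tests, then tear the target down:
    kill "$spdk_pid"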
00:04:10.013 22:04:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:10.013 22:04:06 -- common/autotest_common.sh@10 -- # set +x 00:04:10.013 [2024-11-17 22:04:06.553181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:10.013 [2024-11-17 22:04:06.553416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55509 ] 00:04:10.272 [2024-11-17 22:04:06.688351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.272 [2024-11-17 22:04:06.755581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:10.272 [2024-11-17 22:04:06.756030] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:10.272 [2024-11-17 22:04:06.756184] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 55509' to capture a snapshot of events at runtime. 00:04:10.272 [2024-11-17 22:04:06.756363] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid55509 for offline analysis/debug. 00:04:10.272 [2024-11-17 22:04:06.756520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.209 22:04:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.209 22:04:07 -- common/autotest_common.sh@862 -- # return 0 00:04:11.209 22:04:07 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.209 22:04:07 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.209 22:04:07 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:11.209 22:04:07 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:11.209 22:04:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.209 22:04:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.209 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:11.209 ************************************ 00:04:11.209 START TEST rpc_integrity 00:04:11.209 ************************************ 00:04:11.209 22:04:07 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:11.209 22:04:07 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.209 22:04:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.209 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:11.210 22:04:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.210 22:04:07 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.210 22:04:07 -- rpc/rpc.sh@13 -- # jq length 00:04:11.210 22:04:07 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.210 22:04:07 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.210 22:04:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.210 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:11.210 22:04:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.210 22:04:07 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:11.210 22:04:07 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.210 22:04:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.210 22:04:07 -- 
common/autotest_common.sh@10 -- # set +x 00:04:11.210 22:04:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.210 22:04:07 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.210 { 00:04:11.210 "aliases": [ 00:04:11.210 "4e899261-83e7-48a5-a6b1-64f8b492a1bc" 00:04:11.210 ], 00:04:11.210 "assigned_rate_limits": { 00:04:11.210 "r_mbytes_per_sec": 0, 00:04:11.210 "rw_ios_per_sec": 0, 00:04:11.210 "rw_mbytes_per_sec": 0, 00:04:11.210 "w_mbytes_per_sec": 0 00:04:11.210 }, 00:04:11.210 "block_size": 512, 00:04:11.210 "claimed": false, 00:04:11.210 "driver_specific": {}, 00:04:11.210 "memory_domains": [ 00:04:11.210 { 00:04:11.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.210 "dma_device_type": 2 00:04:11.210 } 00:04:11.210 ], 00:04:11.210 "name": "Malloc0", 00:04:11.210 "num_blocks": 16384, 00:04:11.210 "product_name": "Malloc disk", 00:04:11.210 "supported_io_types": { 00:04:11.210 "abort": true, 00:04:11.210 "compare": false, 00:04:11.210 "compare_and_write": false, 00:04:11.210 "flush": true, 00:04:11.210 "nvme_admin": false, 00:04:11.210 "nvme_io": false, 00:04:11.210 "read": true, 00:04:11.210 "reset": true, 00:04:11.210 "unmap": true, 00:04:11.210 "write": true, 00:04:11.210 "write_zeroes": true 00:04:11.210 }, 00:04:11.210 "uuid": "4e899261-83e7-48a5-a6b1-64f8b492a1bc", 00:04:11.210 "zoned": false 00:04:11.210 } 00:04:11.210 ]' 00:04:11.210 22:04:07 -- rpc/rpc.sh@17 -- # jq length 00:04:11.210 22:04:07 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.210 22:04:07 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:11.210 22:04:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.210 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:11.210 [2024-11-17 22:04:07.722622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:11.210 [2024-11-17 22:04:07.722664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:11.210 [2024-11-17 22:04:07.722686] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2241880 00:04:11.210 [2024-11-17 22:04:07.722696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.210 [2024-11-17 22:04:07.724177] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.210 [2024-11-17 22:04:07.724227] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.210 Passthru0 00:04:11.210 22:04:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.210 22:04:07 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:11.210 22:04:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.210 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:11.210 22:04:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.210 22:04:07 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.210 { 00:04:11.210 "aliases": [ 00:04:11.210 "4e899261-83e7-48a5-a6b1-64f8b492a1bc" 00:04:11.210 ], 00:04:11.210 "assigned_rate_limits": { 00:04:11.210 "r_mbytes_per_sec": 0, 00:04:11.210 "rw_ios_per_sec": 0, 00:04:11.210 "rw_mbytes_per_sec": 0, 00:04:11.210 "w_mbytes_per_sec": 0 00:04:11.210 }, 00:04:11.210 "block_size": 512, 00:04:11.210 "claim_type": "exclusive_write", 00:04:11.210 "claimed": true, 00:04:11.210 "driver_specific": {}, 00:04:11.210 "memory_domains": [ 00:04:11.210 { 00:04:11.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.210 "dma_device_type": 2 00:04:11.210 } 00:04:11.210 ], 00:04:11.210 "name": "Malloc0", 00:04:11.210 "num_blocks": 16384, 
00:04:11.210 "product_name": "Malloc disk", 00:04:11.210 "supported_io_types": { 00:04:11.210 "abort": true, 00:04:11.210 "compare": false, 00:04:11.210 "compare_and_write": false, 00:04:11.210 "flush": true, 00:04:11.210 "nvme_admin": false, 00:04:11.210 "nvme_io": false, 00:04:11.210 "read": true, 00:04:11.210 "reset": true, 00:04:11.210 "unmap": true, 00:04:11.210 "write": true, 00:04:11.210 "write_zeroes": true 00:04:11.210 }, 00:04:11.210 "uuid": "4e899261-83e7-48a5-a6b1-64f8b492a1bc", 00:04:11.210 "zoned": false 00:04:11.210 }, 00:04:11.210 { 00:04:11.210 "aliases": [ 00:04:11.210 "033f1c62-0d1b-55dc-9bb4-1999de2089c0" 00:04:11.210 ], 00:04:11.210 "assigned_rate_limits": { 00:04:11.210 "r_mbytes_per_sec": 0, 00:04:11.210 "rw_ios_per_sec": 0, 00:04:11.210 "rw_mbytes_per_sec": 0, 00:04:11.210 "w_mbytes_per_sec": 0 00:04:11.210 }, 00:04:11.210 "block_size": 512, 00:04:11.210 "claimed": false, 00:04:11.210 "driver_specific": { 00:04:11.210 "passthru": { 00:04:11.210 "base_bdev_name": "Malloc0", 00:04:11.210 "name": "Passthru0" 00:04:11.210 } 00:04:11.210 }, 00:04:11.210 "memory_domains": [ 00:04:11.210 { 00:04:11.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.210 "dma_device_type": 2 00:04:11.210 } 00:04:11.210 ], 00:04:11.210 "name": "Passthru0", 00:04:11.210 "num_blocks": 16384, 00:04:11.210 "product_name": "passthru", 00:04:11.210 "supported_io_types": { 00:04:11.210 "abort": true, 00:04:11.210 "compare": false, 00:04:11.210 "compare_and_write": false, 00:04:11.210 "flush": true, 00:04:11.210 "nvme_admin": false, 00:04:11.210 "nvme_io": false, 00:04:11.210 "read": true, 00:04:11.210 "reset": true, 00:04:11.210 "unmap": true, 00:04:11.210 "write": true, 00:04:11.210 "write_zeroes": true 00:04:11.210 }, 00:04:11.210 "uuid": "033f1c62-0d1b-55dc-9bb4-1999de2089c0", 00:04:11.210 "zoned": false 00:04:11.210 } 00:04:11.210 ]' 00:04:11.210 22:04:07 -- rpc/rpc.sh@21 -- # jq length 00:04:11.210 22:04:07 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.210 22:04:07 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.210 22:04:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.210 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:11.210 22:04:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.210 22:04:07 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:11.210 22:04:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.210 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:11.470 22:04:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.470 22:04:07 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:11.470 22:04:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.470 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:11.470 22:04:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.470 22:04:07 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:11.470 22:04:07 -- rpc/rpc.sh@26 -- # jq length 00:04:11.470 ************************************ 00:04:11.470 END TEST rpc_integrity 00:04:11.470 ************************************ 00:04:11.470 22:04:07 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:11.470 00:04:11.470 real 0m0.323s 00:04:11.470 user 0m0.215s 00:04:11.470 sys 0m0.034s 00:04:11.470 22:04:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.470 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:11.470 22:04:07 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:11.470 22:04:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.470 
22:04:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.470 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:11.470 ************************************ 00:04:11.470 START TEST rpc_plugins 00:04:11.470 ************************************ 00:04:11.470 22:04:07 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:11.470 22:04:07 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:11.470 22:04:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.470 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:11.470 22:04:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.470 22:04:07 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:11.470 22:04:07 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:11.470 22:04:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.470 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:04:11.470 22:04:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.470 22:04:07 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:11.470 { 00:04:11.470 "aliases": [ 00:04:11.470 "bfea8751-9508-4590-9c4c-24798f7d1d01" 00:04:11.470 ], 00:04:11.470 "assigned_rate_limits": { 00:04:11.470 "r_mbytes_per_sec": 0, 00:04:11.470 "rw_ios_per_sec": 0, 00:04:11.470 "rw_mbytes_per_sec": 0, 00:04:11.470 "w_mbytes_per_sec": 0 00:04:11.470 }, 00:04:11.470 "block_size": 4096, 00:04:11.470 "claimed": false, 00:04:11.470 "driver_specific": {}, 00:04:11.470 "memory_domains": [ 00:04:11.470 { 00:04:11.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.470 "dma_device_type": 2 00:04:11.470 } 00:04:11.470 ], 00:04:11.470 "name": "Malloc1", 00:04:11.470 "num_blocks": 256, 00:04:11.470 "product_name": "Malloc disk", 00:04:11.470 "supported_io_types": { 00:04:11.470 "abort": true, 00:04:11.470 "compare": false, 00:04:11.470 "compare_and_write": false, 00:04:11.470 "flush": true, 00:04:11.470 "nvme_admin": false, 00:04:11.470 "nvme_io": false, 00:04:11.470 "read": true, 00:04:11.470 "reset": true, 00:04:11.470 "unmap": true, 00:04:11.470 "write": true, 00:04:11.470 "write_zeroes": true 00:04:11.470 }, 00:04:11.470 "uuid": "bfea8751-9508-4590-9c4c-24798f7d1d01", 00:04:11.470 "zoned": false 00:04:11.470 } 00:04:11.470 ]' 00:04:11.470 22:04:07 -- rpc/rpc.sh@32 -- # jq length 00:04:11.470 22:04:08 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:11.470 22:04:08 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:11.470 22:04:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.470 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:11.470 22:04:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.470 22:04:08 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:11.470 22:04:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.470 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:11.470 22:04:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.470 22:04:08 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:11.470 22:04:08 -- rpc/rpc.sh@36 -- # jq length 00:04:11.730 ************************************ 00:04:11.730 END TEST rpc_plugins 00:04:11.730 ************************************ 00:04:11.730 22:04:08 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:11.730 00:04:11.730 real 0m0.164s 00:04:11.730 user 0m0.102s 00:04:11.730 sys 0m0.023s 00:04:11.730 22:04:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.730 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:11.730 22:04:08 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 
00:04:11.730 22:04:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.730 22:04:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.730 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:11.730 ************************************ 00:04:11.730 START TEST rpc_trace_cmd_test 00:04:11.730 ************************************ 00:04:11.730 22:04:08 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:11.730 22:04:08 -- rpc/rpc.sh@40 -- # local info 00:04:11.730 22:04:08 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:11.730 22:04:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.730 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:11.730 22:04:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.730 22:04:08 -- rpc/rpc.sh@42 -- # info='{ 00:04:11.730 "bdev": { 00:04:11.730 "mask": "0x8", 00:04:11.730 "tpoint_mask": "0xffffffffffffffff" 00:04:11.730 }, 00:04:11.730 "bdev_nvme": { 00:04:11.730 "mask": "0x4000", 00:04:11.730 "tpoint_mask": "0x0" 00:04:11.730 }, 00:04:11.730 "blobfs": { 00:04:11.730 "mask": "0x80", 00:04:11.730 "tpoint_mask": "0x0" 00:04:11.730 }, 00:04:11.730 "dsa": { 00:04:11.730 "mask": "0x200", 00:04:11.730 "tpoint_mask": "0x0" 00:04:11.730 }, 00:04:11.730 "ftl": { 00:04:11.730 "mask": "0x40", 00:04:11.730 "tpoint_mask": "0x0" 00:04:11.730 }, 00:04:11.730 "iaa": { 00:04:11.730 "mask": "0x1000", 00:04:11.730 "tpoint_mask": "0x0" 00:04:11.730 }, 00:04:11.730 "iscsi_conn": { 00:04:11.730 "mask": "0x2", 00:04:11.730 "tpoint_mask": "0x0" 00:04:11.730 }, 00:04:11.730 "nvme_pcie": { 00:04:11.730 "mask": "0x800", 00:04:11.730 "tpoint_mask": "0x0" 00:04:11.730 }, 00:04:11.730 "nvme_tcp": { 00:04:11.730 "mask": "0x2000", 00:04:11.730 "tpoint_mask": "0x0" 00:04:11.730 }, 00:04:11.730 "nvmf_rdma": { 00:04:11.730 "mask": "0x10", 00:04:11.730 "tpoint_mask": "0x0" 00:04:11.730 }, 00:04:11.730 "nvmf_tcp": { 00:04:11.730 "mask": "0x20", 00:04:11.730 "tpoint_mask": "0x0" 00:04:11.730 }, 00:04:11.730 "scsi": { 00:04:11.730 "mask": "0x4", 00:04:11.730 "tpoint_mask": "0x0" 00:04:11.730 }, 00:04:11.730 "thread": { 00:04:11.730 "mask": "0x400", 00:04:11.730 "tpoint_mask": "0x0" 00:04:11.730 }, 00:04:11.730 "tpoint_group_mask": "0x8", 00:04:11.730 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid55509" 00:04:11.730 }' 00:04:11.730 22:04:08 -- rpc/rpc.sh@43 -- # jq length 00:04:11.730 22:04:08 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:11.730 22:04:08 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:11.730 22:04:08 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:11.730 22:04:08 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:11.730 22:04:08 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:11.730 22:04:08 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:11.989 22:04:08 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:11.989 22:04:08 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:11.989 ************************************ 00:04:11.989 END TEST rpc_trace_cmd_test 00:04:11.989 ************************************ 00:04:11.989 22:04:08 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:11.989 00:04:11.989 real 0m0.273s 00:04:11.989 user 0m0.233s 00:04:11.989 sys 0m0.032s 00:04:11.989 22:04:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.989 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:11.989 22:04:08 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:11.989 22:04:08 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:11.989 22:04:08 -- common/autotest_common.sh@1087 -- # 
'[' 2 -le 1 ']' 00:04:11.989 22:04:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.989 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:11.989 ************************************ 00:04:11.989 START TEST go_rpc 00:04:11.989 ************************************ 00:04:11.989 22:04:08 -- common/autotest_common.sh@1114 -- # go_rpc 00:04:11.989 22:04:08 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:11.989 22:04:08 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:11.989 22:04:08 -- rpc/rpc.sh@52 -- # jq length 00:04:11.989 22:04:08 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:11.989 22:04:08 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.989 22:04:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.989 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:11.989 22:04:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.989 22:04:08 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:11.989 22:04:08 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:11.989 22:04:08 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["404bbbb8-292e-4c51-87ef-042e2829ff46"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"404bbbb8-292e-4c51-87ef-042e2829ff46","zoned":false}]' 00:04:11.989 22:04:08 -- rpc/rpc.sh@57 -- # jq length 00:04:12.248 22:04:08 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:12.248 22:04:08 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:12.248 22:04:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.248 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:12.248 22:04:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.248 22:04:08 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:12.248 22:04:08 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:12.248 22:04:08 -- rpc/rpc.sh@61 -- # jq length 00:04:12.248 ************************************ 00:04:12.248 END TEST go_rpc 00:04:12.248 ************************************ 00:04:12.248 22:04:08 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:12.248 00:04:12.248 real 0m0.223s 00:04:12.248 user 0m0.154s 00:04:12.248 sys 0m0.037s 00:04:12.248 22:04:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:12.248 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:12.248 22:04:08 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:12.248 22:04:08 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:12.248 22:04:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.248 22:04:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.248 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:12.248 ************************************ 00:04:12.248 START TEST rpc_daemon_integrity 00:04:12.248 ************************************ 00:04:12.248 22:04:08 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:12.248 22:04:08 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:12.248 22:04:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.248 22:04:08 -- 
common/autotest_common.sh@10 -- # set +x 00:04:12.248 22:04:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.248 22:04:08 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:12.248 22:04:08 -- rpc/rpc.sh@13 -- # jq length 00:04:12.248 22:04:08 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:12.248 22:04:08 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:12.248 22:04:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.248 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:12.248 22:04:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.248 22:04:08 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:12.248 22:04:08 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:12.248 22:04:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.248 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:12.509 22:04:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.509 22:04:08 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:12.509 { 00:04:12.509 "aliases": [ 00:04:12.509 "9bbb9503-1c0d-4c94-a8b7-5d13ee58d101" 00:04:12.509 ], 00:04:12.509 "assigned_rate_limits": { 00:04:12.509 "r_mbytes_per_sec": 0, 00:04:12.509 "rw_ios_per_sec": 0, 00:04:12.509 "rw_mbytes_per_sec": 0, 00:04:12.509 "w_mbytes_per_sec": 0 00:04:12.509 }, 00:04:12.509 "block_size": 512, 00:04:12.509 "claimed": false, 00:04:12.509 "driver_specific": {}, 00:04:12.509 "memory_domains": [ 00:04:12.509 { 00:04:12.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.509 "dma_device_type": 2 00:04:12.509 } 00:04:12.509 ], 00:04:12.509 "name": "Malloc3", 00:04:12.509 "num_blocks": 16384, 00:04:12.509 "product_name": "Malloc disk", 00:04:12.509 "supported_io_types": { 00:04:12.509 "abort": true, 00:04:12.509 "compare": false, 00:04:12.509 "compare_and_write": false, 00:04:12.509 "flush": true, 00:04:12.509 "nvme_admin": false, 00:04:12.509 "nvme_io": false, 00:04:12.509 "read": true, 00:04:12.509 "reset": true, 00:04:12.509 "unmap": true, 00:04:12.509 "write": true, 00:04:12.509 "write_zeroes": true 00:04:12.509 }, 00:04:12.509 "uuid": "9bbb9503-1c0d-4c94-a8b7-5d13ee58d101", 00:04:12.509 "zoned": false 00:04:12.509 } 00:04:12.509 ]' 00:04:12.509 22:04:08 -- rpc/rpc.sh@17 -- # jq length 00:04:12.509 22:04:08 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:12.509 22:04:08 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:12.509 22:04:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.509 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:12.509 [2024-11-17 22:04:08.920083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:12.509 [2024-11-17 22:04:08.920136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:12.509 [2024-11-17 22:04:08.920155] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2432680 00:04:12.509 [2024-11-17 22:04:08.920165] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:12.509 [2024-11-17 22:04:08.921404] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:12.509 [2024-11-17 22:04:08.921434] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:12.509 Passthru0 00:04:12.509 22:04:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.509 22:04:08 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:12.509 22:04:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.509 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:04:12.509 
22:04:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.509 22:04:08 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:12.509 { 00:04:12.509 "aliases": [ 00:04:12.509 "9bbb9503-1c0d-4c94-a8b7-5d13ee58d101" 00:04:12.509 ], 00:04:12.509 "assigned_rate_limits": { 00:04:12.509 "r_mbytes_per_sec": 0, 00:04:12.509 "rw_ios_per_sec": 0, 00:04:12.509 "rw_mbytes_per_sec": 0, 00:04:12.509 "w_mbytes_per_sec": 0 00:04:12.509 }, 00:04:12.509 "block_size": 512, 00:04:12.509 "claim_type": "exclusive_write", 00:04:12.509 "claimed": true, 00:04:12.509 "driver_specific": {}, 00:04:12.509 "memory_domains": [ 00:04:12.509 { 00:04:12.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.509 "dma_device_type": 2 00:04:12.509 } 00:04:12.509 ], 00:04:12.509 "name": "Malloc3", 00:04:12.509 "num_blocks": 16384, 00:04:12.509 "product_name": "Malloc disk", 00:04:12.509 "supported_io_types": { 00:04:12.509 "abort": true, 00:04:12.509 "compare": false, 00:04:12.509 "compare_and_write": false, 00:04:12.509 "flush": true, 00:04:12.509 "nvme_admin": false, 00:04:12.509 "nvme_io": false, 00:04:12.509 "read": true, 00:04:12.509 "reset": true, 00:04:12.509 "unmap": true, 00:04:12.509 "write": true, 00:04:12.509 "write_zeroes": true 00:04:12.509 }, 00:04:12.509 "uuid": "9bbb9503-1c0d-4c94-a8b7-5d13ee58d101", 00:04:12.509 "zoned": false 00:04:12.509 }, 00:04:12.509 { 00:04:12.509 "aliases": [ 00:04:12.509 "7c925a8a-c245-55bc-9d49-5fabebb40fe0" 00:04:12.509 ], 00:04:12.509 "assigned_rate_limits": { 00:04:12.509 "r_mbytes_per_sec": 0, 00:04:12.509 "rw_ios_per_sec": 0, 00:04:12.509 "rw_mbytes_per_sec": 0, 00:04:12.509 "w_mbytes_per_sec": 0 00:04:12.509 }, 00:04:12.509 "block_size": 512, 00:04:12.509 "claimed": false, 00:04:12.509 "driver_specific": { 00:04:12.509 "passthru": { 00:04:12.509 "base_bdev_name": "Malloc3", 00:04:12.509 "name": "Passthru0" 00:04:12.509 } 00:04:12.509 }, 00:04:12.509 "memory_domains": [ 00:04:12.509 { 00:04:12.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.509 "dma_device_type": 2 00:04:12.509 } 00:04:12.509 ], 00:04:12.509 "name": "Passthru0", 00:04:12.509 "num_blocks": 16384, 00:04:12.509 "product_name": "passthru", 00:04:12.509 "supported_io_types": { 00:04:12.509 "abort": true, 00:04:12.509 "compare": false, 00:04:12.509 "compare_and_write": false, 00:04:12.509 "flush": true, 00:04:12.509 "nvme_admin": false, 00:04:12.509 "nvme_io": false, 00:04:12.509 "read": true, 00:04:12.509 "reset": true, 00:04:12.509 "unmap": true, 00:04:12.509 "write": true, 00:04:12.509 "write_zeroes": true 00:04:12.509 }, 00:04:12.509 "uuid": "7c925a8a-c245-55bc-9d49-5fabebb40fe0", 00:04:12.509 "zoned": false 00:04:12.509 } 00:04:12.509 ]' 00:04:12.509 22:04:08 -- rpc/rpc.sh@21 -- # jq length 00:04:12.509 22:04:08 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:12.509 22:04:09 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:12.509 22:04:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.509 22:04:09 -- common/autotest_common.sh@10 -- # set +x 00:04:12.509 22:04:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.509 22:04:09 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:12.509 22:04:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.509 22:04:09 -- common/autotest_common.sh@10 -- # set +x 00:04:12.509 22:04:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.509 22:04:09 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:12.509 22:04:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.509 22:04:09 -- 
common/autotest_common.sh@10 -- # set +x 00:04:12.509 22:04:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.509 22:04:09 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:12.509 22:04:09 -- rpc/rpc.sh@26 -- # jq length 00:04:12.509 ************************************ 00:04:12.509 END TEST rpc_daemon_integrity 00:04:12.509 ************************************ 00:04:12.509 22:04:09 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:12.509 00:04:12.509 real 0m0.311s 00:04:12.509 user 0m0.206s 00:04:12.509 sys 0m0.036s 00:04:12.509 22:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:12.509 22:04:09 -- common/autotest_common.sh@10 -- # set +x 00:04:12.768 22:04:09 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:12.768 22:04:09 -- rpc/rpc.sh@84 -- # killprocess 55509 00:04:12.768 22:04:09 -- common/autotest_common.sh@936 -- # '[' -z 55509 ']' 00:04:12.768 22:04:09 -- common/autotest_common.sh@940 -- # kill -0 55509 00:04:12.768 22:04:09 -- common/autotest_common.sh@941 -- # uname 00:04:12.768 22:04:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:12.768 22:04:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55509 00:04:12.768 killing process with pid 55509 00:04:12.768 22:04:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:12.768 22:04:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:12.768 22:04:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55509' 00:04:12.768 22:04:09 -- common/autotest_common.sh@955 -- # kill 55509 00:04:12.768 22:04:09 -- common/autotest_common.sh@960 -- # wait 55509 00:04:13.337 00:04:13.337 real 0m3.373s 00:04:13.337 user 0m4.400s 00:04:13.337 sys 0m0.757s 00:04:13.337 22:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:13.337 ************************************ 00:04:13.337 END TEST rpc 00:04:13.337 22:04:09 -- common/autotest_common.sh@10 -- # set +x 00:04:13.337 ************************************ 00:04:13.337 22:04:09 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:13.337 22:04:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.337 22:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.337 22:04:09 -- common/autotest_common.sh@10 -- # set +x 00:04:13.337 ************************************ 00:04:13.337 START TEST rpc_client 00:04:13.337 ************************************ 00:04:13.337 22:04:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:13.337 * Looking for test storage... 
00:04:13.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:13.337 22:04:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:13.337 22:04:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:13.337 22:04:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:13.337 22:04:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:13.337 22:04:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:13.337 22:04:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:13.337 22:04:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:13.337 22:04:09 -- scripts/common.sh@335 -- # IFS=.-: 00:04:13.337 22:04:09 -- scripts/common.sh@335 -- # read -ra ver1 00:04:13.337 22:04:09 -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.337 22:04:09 -- scripts/common.sh@336 -- # read -ra ver2 00:04:13.337 22:04:09 -- scripts/common.sh@337 -- # local 'op=<' 00:04:13.337 22:04:09 -- scripts/common.sh@339 -- # ver1_l=2 00:04:13.337 22:04:09 -- scripts/common.sh@340 -- # ver2_l=1 00:04:13.337 22:04:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:13.337 22:04:09 -- scripts/common.sh@343 -- # case "$op" in 00:04:13.337 22:04:09 -- scripts/common.sh@344 -- # : 1 00:04:13.337 22:04:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:13.337 22:04:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.337 22:04:09 -- scripts/common.sh@364 -- # decimal 1 00:04:13.337 22:04:09 -- scripts/common.sh@352 -- # local d=1 00:04:13.337 22:04:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.337 22:04:09 -- scripts/common.sh@354 -- # echo 1 00:04:13.337 22:04:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:13.337 22:04:09 -- scripts/common.sh@365 -- # decimal 2 00:04:13.337 22:04:09 -- scripts/common.sh@352 -- # local d=2 00:04:13.337 22:04:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.337 22:04:09 -- scripts/common.sh@354 -- # echo 2 00:04:13.337 22:04:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:13.337 22:04:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:13.337 22:04:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:13.337 22:04:09 -- scripts/common.sh@367 -- # return 0 00:04:13.337 22:04:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.337 22:04:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:13.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.337 --rc genhtml_branch_coverage=1 00:04:13.337 --rc genhtml_function_coverage=1 00:04:13.337 --rc genhtml_legend=1 00:04:13.337 --rc geninfo_all_blocks=1 00:04:13.337 --rc geninfo_unexecuted_blocks=1 00:04:13.337 00:04:13.337 ' 00:04:13.337 22:04:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:13.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.337 --rc genhtml_branch_coverage=1 00:04:13.337 --rc genhtml_function_coverage=1 00:04:13.337 --rc genhtml_legend=1 00:04:13.337 --rc geninfo_all_blocks=1 00:04:13.337 --rc geninfo_unexecuted_blocks=1 00:04:13.337 00:04:13.337 ' 00:04:13.337 22:04:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:13.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.337 --rc genhtml_branch_coverage=1 00:04:13.337 --rc genhtml_function_coverage=1 00:04:13.337 --rc genhtml_legend=1 00:04:13.337 --rc geninfo_all_blocks=1 00:04:13.337 --rc geninfo_unexecuted_blocks=1 00:04:13.337 00:04:13.337 ' 00:04:13.337 
22:04:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:13.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.337 --rc genhtml_branch_coverage=1 00:04:13.337 --rc genhtml_function_coverage=1 00:04:13.337 --rc genhtml_legend=1 00:04:13.337 --rc geninfo_all_blocks=1 00:04:13.337 --rc geninfo_unexecuted_blocks=1 00:04:13.337 00:04:13.337 ' 00:04:13.337 22:04:09 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:13.337 OK 00:04:13.337 22:04:09 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:13.337 00:04:13.337 real 0m0.182s 00:04:13.337 user 0m0.106s 00:04:13.337 sys 0m0.087s 00:04:13.337 22:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:13.337 ************************************ 00:04:13.337 22:04:09 -- common/autotest_common.sh@10 -- # set +x 00:04:13.337 END TEST rpc_client 00:04:13.337 ************************************ 00:04:13.597 22:04:09 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:13.597 22:04:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.597 22:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.597 22:04:09 -- common/autotest_common.sh@10 -- # set +x 00:04:13.597 ************************************ 00:04:13.597 START TEST json_config 00:04:13.597 ************************************ 00:04:13.597 22:04:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:13.597 22:04:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:13.597 22:04:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:13.597 22:04:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:13.597 22:04:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:13.597 22:04:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:13.597 22:04:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:13.597 22:04:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:13.597 22:04:10 -- scripts/common.sh@335 -- # IFS=.-: 00:04:13.597 22:04:10 -- scripts/common.sh@335 -- # read -ra ver1 00:04:13.597 22:04:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.597 22:04:10 -- scripts/common.sh@336 -- # read -ra ver2 00:04:13.597 22:04:10 -- scripts/common.sh@337 -- # local 'op=<' 00:04:13.597 22:04:10 -- scripts/common.sh@339 -- # ver1_l=2 00:04:13.597 22:04:10 -- scripts/common.sh@340 -- # ver2_l=1 00:04:13.597 22:04:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:13.597 22:04:10 -- scripts/common.sh@343 -- # case "$op" in 00:04:13.597 22:04:10 -- scripts/common.sh@344 -- # : 1 00:04:13.597 22:04:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:13.597 22:04:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.597 22:04:10 -- scripts/common.sh@364 -- # decimal 1 00:04:13.597 22:04:10 -- scripts/common.sh@352 -- # local d=1 00:04:13.597 22:04:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.597 22:04:10 -- scripts/common.sh@354 -- # echo 1 00:04:13.597 22:04:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:13.597 22:04:10 -- scripts/common.sh@365 -- # decimal 2 00:04:13.597 22:04:10 -- scripts/common.sh@352 -- # local d=2 00:04:13.597 22:04:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.597 22:04:10 -- scripts/common.sh@354 -- # echo 2 00:04:13.597 22:04:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:13.597 22:04:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:13.597 22:04:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:13.597 22:04:10 -- scripts/common.sh@367 -- # return 0 00:04:13.597 22:04:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.597 22:04:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:13.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.597 --rc genhtml_branch_coverage=1 00:04:13.597 --rc genhtml_function_coverage=1 00:04:13.597 --rc genhtml_legend=1 00:04:13.597 --rc geninfo_all_blocks=1 00:04:13.597 --rc geninfo_unexecuted_blocks=1 00:04:13.597 00:04:13.597 ' 00:04:13.597 22:04:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:13.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.598 --rc genhtml_branch_coverage=1 00:04:13.598 --rc genhtml_function_coverage=1 00:04:13.598 --rc genhtml_legend=1 00:04:13.598 --rc geninfo_all_blocks=1 00:04:13.598 --rc geninfo_unexecuted_blocks=1 00:04:13.598 00:04:13.598 ' 00:04:13.598 22:04:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:13.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.598 --rc genhtml_branch_coverage=1 00:04:13.598 --rc genhtml_function_coverage=1 00:04:13.598 --rc genhtml_legend=1 00:04:13.598 --rc geninfo_all_blocks=1 00:04:13.598 --rc geninfo_unexecuted_blocks=1 00:04:13.598 00:04:13.598 ' 00:04:13.598 22:04:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:13.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.598 --rc genhtml_branch_coverage=1 00:04:13.598 --rc genhtml_function_coverage=1 00:04:13.598 --rc genhtml_legend=1 00:04:13.598 --rc geninfo_all_blocks=1 00:04:13.598 --rc geninfo_unexecuted_blocks=1 00:04:13.598 00:04:13.598 ' 00:04:13.598 22:04:10 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:13.598 22:04:10 -- nvmf/common.sh@7 -- # uname -s 00:04:13.598 22:04:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.598 22:04:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.598 22:04:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.598 22:04:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.598 22:04:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:13.598 22:04:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.598 22:04:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.598 22:04:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:13.598 22:04:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:13.598 22:04:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.598 22:04:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 
00:04:13.598 22:04:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:04:13.598 22:04:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.598 22:04:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.598 22:04:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:13.598 22:04:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:13.598 22:04:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.598 22:04:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.598 22:04:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.598 22:04:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.598 22:04:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.598 22:04:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.598 22:04:10 -- paths/export.sh@5 -- # export PATH 00:04:13.598 22:04:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.598 22:04:10 -- nvmf/common.sh@46 -- # : 0 00:04:13.598 22:04:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:13.598 22:04:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:13.598 22:04:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:13.598 22:04:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.598 22:04:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.598 22:04:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:13.598 22:04:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:13.598 22:04:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:13.598 22:04:10 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:13.598 22:04:10 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:13.598 22:04:10 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:13.598 22:04:10 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:13.598 22:04:10 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:13.598 22:04:10 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:13.598 22:04:10 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:13.598 22:04:10 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:13.598 22:04:10 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:13.598 22:04:10 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:13.598 22:04:10 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:13.598 22:04:10 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:13.598 22:04:10 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:13.598 22:04:10 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:13.598 INFO: JSON configuration test init 00:04:13.598 22:04:10 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:13.598 22:04:10 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:13.598 22:04:10 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:13.598 22:04:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.598 22:04:10 -- common/autotest_common.sh@10 -- # set +x 00:04:13.598 22:04:10 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:13.598 22:04:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.598 22:04:10 -- common/autotest_common.sh@10 -- # set +x 00:04:13.598 22:04:10 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:13.598 22:04:10 -- json_config/json_config.sh@98 -- # local app=target 00:04:13.598 22:04:10 -- json_config/json_config.sh@99 -- # shift 00:04:13.598 22:04:10 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:13.598 22:04:10 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:13.598 22:04:10 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:13.598 22:04:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:13.598 22:04:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:13.598 22:04:10 -- json_config/json_config.sh@111 -- # app_pid[$app]=55825 00:04:13.598 Waiting for target to run... 00:04:13.598 22:04:10 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:13.598 22:04:10 -- json_config/json_config.sh@114 -- # waitforlisten 55825 /var/tmp/spdk_tgt.sock 00:04:13.598 22:04:10 -- common/autotest_common.sh@829 -- # '[' -z 55825 ']' 00:04:13.598 22:04:10 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:13.598 22:04:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:13.598 22:04:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:13.598 22:04:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:13.598 22:04:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.598 22:04:10 -- common/autotest_common.sh@10 -- # set +x 00:04:13.857 [2024-11-17 22:04:10.253600] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:13.857 [2024-11-17 22:04:10.253715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55825 ] 00:04:14.424 [2024-11-17 22:04:10.767590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.424 [2024-11-17 22:04:10.848779] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:14.425 [2024-11-17 22:04:10.848939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.684 22:04:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.684 22:04:11 -- common/autotest_common.sh@862 -- # return 0 00:04:14.684 00:04:14.684 22:04:11 -- json_config/json_config.sh@115 -- # echo '' 00:04:14.684 22:04:11 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:14.684 22:04:11 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:14.684 22:04:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:14.684 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:04:14.684 22:04:11 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:14.684 22:04:11 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:14.684 22:04:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:14.684 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:04:14.684 22:04:11 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:14.684 22:04:11 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:14.684 22:04:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:15.251 22:04:11 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:15.251 22:04:11 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:15.251 22:04:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.251 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:04:15.251 22:04:11 -- json_config/json_config.sh@48 -- # local ret=0 00:04:15.252 22:04:11 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:15.252 22:04:11 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:15.252 22:04:11 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:15.252 22:04:11 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:15.252 22:04:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:15.510 22:04:12 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:15.510 22:04:12 -- json_config/json_config.sh@51 -- # local get_types 00:04:15.510 22:04:12 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:15.510 22:04:12 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:15.510 22:04:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:15.510 22:04:12 -- 
common/autotest_common.sh@10 -- # set +x 00:04:15.510 22:04:12 -- json_config/json_config.sh@58 -- # return 0 00:04:15.510 22:04:12 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:15.510 22:04:12 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:15.510 22:04:12 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:15.510 22:04:12 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:15.510 22:04:12 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:15.510 22:04:12 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:15.510 22:04:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.510 22:04:12 -- common/autotest_common.sh@10 -- # set +x 00:04:15.510 22:04:12 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:15.510 22:04:12 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:15.510 22:04:12 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:15.510 22:04:12 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.510 22:04:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.769 MallocForNvmf0 00:04:15.769 22:04:12 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.769 22:04:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:16.338 MallocForNvmf1 00:04:16.338 22:04:12 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:16.338 22:04:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:16.338 [2024-11-17 22:04:12.899459] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.338 22:04:12 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.338 22:04:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.596 22:04:13 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.596 22:04:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.855 22:04:13 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:16.855 22:04:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:17.114 22:04:13 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:17.114 22:04:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:17.373 [2024-11-17 22:04:13.783917] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:17.373 
22:04:13 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:17.373 22:04:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.373 22:04:13 -- common/autotest_common.sh@10 -- # set +x 00:04:17.373 22:04:13 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:17.373 22:04:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.373 22:04:13 -- common/autotest_common.sh@10 -- # set +x 00:04:17.373 22:04:13 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:17.373 22:04:13 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:17.373 22:04:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:17.656 MallocBdevForConfigChangeCheck 00:04:17.656 22:04:14 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:17.656 22:04:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.656 22:04:14 -- common/autotest_common.sh@10 -- # set +x 00:04:17.656 22:04:14 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:17.656 22:04:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:17.933 INFO: shutting down applications... 00:04:17.933 22:04:14 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:17.933 22:04:14 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:17.933 22:04:14 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:17.933 22:04:14 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:17.933 22:04:14 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:18.191 Calling clear_iscsi_subsystem 00:04:18.191 Calling clear_nvmf_subsystem 00:04:18.191 Calling clear_nbd_subsystem 00:04:18.191 Calling clear_ublk_subsystem 00:04:18.191 Calling clear_vhost_blk_subsystem 00:04:18.191 Calling clear_vhost_scsi_subsystem 00:04:18.191 Calling clear_scheduler_subsystem 00:04:18.191 Calling clear_bdev_subsystem 00:04:18.191 Calling clear_accel_subsystem 00:04:18.191 Calling clear_vmd_subsystem 00:04:18.191 Calling clear_sock_subsystem 00:04:18.191 Calling clear_iobuf_subsystem 00:04:18.449 22:04:14 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:18.449 22:04:14 -- json_config/json_config.sh@396 -- # count=100 00:04:18.449 22:04:14 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:18.449 22:04:14 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:18.449 22:04:14 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.449 22:04:14 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:18.708 22:04:15 -- json_config/json_config.sh@398 -- # break 00:04:18.708 22:04:15 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:18.708 22:04:15 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:18.708 22:04:15 -- json_config/json_config.sh@120 -- # local app=target 00:04:18.708 22:04:15 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:18.708 22:04:15 -- json_config/json_config.sh@124 -- # [[ -n 55825 ]] 00:04:18.708 22:04:15 -- json_config/json_config.sh@127 -- # kill -SIGINT 55825 00:04:18.708 22:04:15 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:18.708 22:04:15 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:18.708 22:04:15 -- json_config/json_config.sh@130 -- # kill -0 55825 00:04:18.708 22:04:15 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:19.276 22:04:15 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:19.276 22:04:15 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:19.276 22:04:15 -- json_config/json_config.sh@130 -- # kill -0 55825 00:04:19.276 22:04:15 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:19.276 22:04:15 -- json_config/json_config.sh@132 -- # break 00:04:19.276 SPDK target shutdown done 00:04:19.276 22:04:15 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:19.276 22:04:15 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:19.276 INFO: relaunching applications... 00:04:19.276 22:04:15 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:19.276 22:04:15 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:19.276 22:04:15 -- json_config/json_config.sh@98 -- # local app=target 00:04:19.276 22:04:15 -- json_config/json_config.sh@99 -- # shift 00:04:19.276 22:04:15 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:19.276 22:04:15 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:19.276 22:04:15 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:19.276 22:04:15 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:19.276 22:04:15 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:19.276 22:04:15 -- json_config/json_config.sh@111 -- # app_pid[$app]=56105 00:04:19.276 Waiting for target to run... 00:04:19.276 22:04:15 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:19.276 22:04:15 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:19.276 22:04:15 -- json_config/json_config.sh@114 -- # waitforlisten 56105 /var/tmp/spdk_tgt.sock 00:04:19.276 22:04:15 -- common/autotest_common.sh@829 -- # '[' -z 56105 ']' 00:04:19.276 22:04:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.276 22:04:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.276 22:04:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.276 22:04:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.276 22:04:15 -- common/autotest_common.sh@10 -- # set +x 00:04:19.276 [2024-11-17 22:04:15.717367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:19.276 [2024-11-17 22:04:15.717465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56105 ] 00:04:19.843 [2024-11-17 22:04:16.214419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.843 [2024-11-17 22:04:16.297722] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:19.843 [2024-11-17 22:04:16.297902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.101 [2024-11-17 22:04:16.616661] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.101 [2024-11-17 22:04:16.648788] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:21.037 22:04:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:21.037 22:04:17 -- common/autotest_common.sh@862 -- # return 0 00:04:21.037 00:04:21.037 22:04:17 -- json_config/json_config.sh@115 -- # echo '' 00:04:21.037 22:04:17 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:21.037 INFO: Checking if target configuration is the same... 00:04:21.037 22:04:17 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:21.037 22:04:17 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.037 22:04:17 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:21.037 22:04:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.037 + '[' 2 -ne 2 ']' 00:04:21.037 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:21.037 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:21.037 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:21.037 +++ basename /dev/fd/62 00:04:21.037 ++ mktemp /tmp/62.XXX 00:04:21.037 + tmp_file_1=/tmp/62.9iv 00:04:21.037 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.037 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:21.037 + tmp_file_2=/tmp/spdk_tgt_config.json.rPQ 00:04:21.037 + ret=0 00:04:21.037 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.296 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.296 + diff -u /tmp/62.9iv /tmp/spdk_tgt_config.json.rPQ 00:04:21.296 INFO: JSON config files are the same 00:04:21.296 + echo 'INFO: JSON config files are the same' 00:04:21.296 + rm /tmp/62.9iv /tmp/spdk_tgt_config.json.rPQ 00:04:21.296 + exit 0 00:04:21.296 22:04:17 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:21.296 22:04:17 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:21.296 INFO: changing configuration and checking if this can be detected... 
00:04:21.296 22:04:17 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:21.296 22:04:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:21.554 22:04:17 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.555 22:04:17 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:21.555 22:04:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.555 + '[' 2 -ne 2 ']' 00:04:21.555 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:21.555 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:21.555 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:21.555 +++ basename /dev/fd/62 00:04:21.555 ++ mktemp /tmp/62.XXX 00:04:21.555 + tmp_file_1=/tmp/62.kXv 00:04:21.555 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.555 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:21.555 + tmp_file_2=/tmp/spdk_tgt_config.json.oda 00:04:21.555 + ret=0 00:04:21.555 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.814 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:22.073 + diff -u /tmp/62.kXv /tmp/spdk_tgt_config.json.oda 00:04:22.073 + ret=1 00:04:22.073 + echo '=== Start of file: /tmp/62.kXv ===' 00:04:22.073 + cat /tmp/62.kXv 00:04:22.073 + echo '=== End of file: /tmp/62.kXv ===' 00:04:22.073 + echo '' 00:04:22.073 + echo '=== Start of file: /tmp/spdk_tgt_config.json.oda ===' 00:04:22.073 + cat /tmp/spdk_tgt_config.json.oda 00:04:22.073 + echo '=== End of file: /tmp/spdk_tgt_config.json.oda ===' 00:04:22.073 + echo '' 00:04:22.073 + rm /tmp/62.kXv /tmp/spdk_tgt_config.json.oda 00:04:22.073 + exit 1 00:04:22.073 INFO: configuration change detected. 00:04:22.073 22:04:18 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
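Both comparison passes traced above follow the same recipe: dump the live configuration of the running target over its RPC socket with save_config, normalize the saved baseline and the live dump with config_filter.py -method sort so key ordering cannot produce spurious differences, and diff the two. The first pass exits 0 ("JSON config files are the same"); after bdev_malloc_delete removes MallocBdevForConfigChangeCheck, the second pass exits 1 ("configuration change detected"). A minimal standalone sketch of that recipe, assuming a target already listening on /var/tmp/spdk_tgt.sock and an SPDK checkout under $SPDK_DIR (both illustrative here), with the filter's stdin/stdout plumbing inferred from the trace:

#!/usr/bin/env bash
# Minimal sketch of the config-comparison recipe from the trace above.
# Assumptions (not taken from the log): $SPDK_DIR points at an SPDK checkout and
# a target started from $BASELINE_JSON is already listening on $RPC_SOCK.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
RPC_SOCK=/var/tmp/spdk_tgt.sock
BASELINE_JSON="$SPDK_DIR/spdk_tgt_config.json"

live_cfg=$(mktemp /tmp/live_cfg.XXX)
want_cfg=$(mktemp /tmp/want_cfg.XXX)
trap 'rm -f "$live_cfg" "$want_cfg"' EXIT

# Dump the running configuration over the RPC socket and sort both documents so
# that ordering differences do not show up as spurious diffs.
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" save_config \
    | "$SPDK_DIR/test/json_config/config_filter.py" -method sort > "$live_cfg"
"$SPDK_DIR/test/json_config/config_filter.py" -method sort \
    < "$BASELINE_JSON" > "$want_cfg"

if diff -u "$want_cfg" "$live_cfg"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi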
00:04:22.073 22:04:18 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:22.073 22:04:18 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:22.073 22:04:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.073 22:04:18 -- common/autotest_common.sh@10 -- # set +x 00:04:22.073 22:04:18 -- json_config/json_config.sh@360 -- # local ret=0 00:04:22.073 22:04:18 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:22.073 22:04:18 -- json_config/json_config.sh@370 -- # [[ -n 56105 ]] 00:04:22.073 22:04:18 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:22.073 22:04:18 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:22.073 22:04:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.073 22:04:18 -- common/autotest_common.sh@10 -- # set +x 00:04:22.073 22:04:18 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:22.073 22:04:18 -- json_config/json_config.sh@246 -- # uname -s 00:04:22.073 22:04:18 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:22.073 22:04:18 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:22.073 22:04:18 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:22.073 22:04:18 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:22.073 22:04:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.073 22:04:18 -- common/autotest_common.sh@10 -- # set +x 00:04:22.073 22:04:18 -- json_config/json_config.sh@376 -- # killprocess 56105 00:04:22.073 22:04:18 -- common/autotest_common.sh@936 -- # '[' -z 56105 ']' 00:04:22.073 22:04:18 -- common/autotest_common.sh@940 -- # kill -0 56105 00:04:22.073 22:04:18 -- common/autotest_common.sh@941 -- # uname 00:04:22.073 22:04:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:22.073 22:04:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56105 00:04:22.073 22:04:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:22.073 22:04:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:22.073 killing process with pid 56105 00:04:22.073 22:04:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56105' 00:04:22.073 22:04:18 -- common/autotest_common.sh@955 -- # kill 56105 00:04:22.073 22:04:18 -- common/autotest_common.sh@960 -- # wait 56105 00:04:22.332 22:04:18 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:22.332 22:04:18 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:22.332 22:04:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.332 22:04:18 -- common/autotest_common.sh@10 -- # set +x 00:04:22.590 22:04:18 -- json_config/json_config.sh@381 -- # return 0 00:04:22.590 INFO: Success 00:04:22.590 22:04:18 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:22.590 00:04:22.590 real 0m8.991s 00:04:22.590 user 0m12.402s 00:04:22.590 sys 0m2.054s 00:04:22.590 22:04:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:22.590 22:04:18 -- common/autotest_common.sh@10 -- # set +x 00:04:22.590 ************************************ 00:04:22.590 END TEST json_config 00:04:22.590 ************************************ 00:04:22.590 22:04:19 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:22.590 
22:04:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.590 22:04:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.590 22:04:19 -- common/autotest_common.sh@10 -- # set +x 00:04:22.590 ************************************ 00:04:22.590 START TEST json_config_extra_key 00:04:22.590 ************************************ 00:04:22.590 22:04:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:22.590 22:04:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:22.591 22:04:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:22.591 22:04:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:22.850 22:04:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:22.850 22:04:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:22.850 22:04:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:22.850 22:04:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:22.850 22:04:19 -- scripts/common.sh@335 -- # IFS=.-: 00:04:22.850 22:04:19 -- scripts/common.sh@335 -- # read -ra ver1 00:04:22.850 22:04:19 -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.850 22:04:19 -- scripts/common.sh@336 -- # read -ra ver2 00:04:22.850 22:04:19 -- scripts/common.sh@337 -- # local 'op=<' 00:04:22.850 22:04:19 -- scripts/common.sh@339 -- # ver1_l=2 00:04:22.850 22:04:19 -- scripts/common.sh@340 -- # ver2_l=1 00:04:22.850 22:04:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:22.850 22:04:19 -- scripts/common.sh@343 -- # case "$op" in 00:04:22.850 22:04:19 -- scripts/common.sh@344 -- # : 1 00:04:22.850 22:04:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:22.850 22:04:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:22.850 22:04:19 -- scripts/common.sh@364 -- # decimal 1 00:04:22.850 22:04:19 -- scripts/common.sh@352 -- # local d=1 00:04:22.850 22:04:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.850 22:04:19 -- scripts/common.sh@354 -- # echo 1 00:04:22.850 22:04:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:22.850 22:04:19 -- scripts/common.sh@365 -- # decimal 2 00:04:22.850 22:04:19 -- scripts/common.sh@352 -- # local d=2 00:04:22.850 22:04:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.850 22:04:19 -- scripts/common.sh@354 -- # echo 2 00:04:22.850 22:04:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:22.850 22:04:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:22.850 22:04:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:22.850 22:04:19 -- scripts/common.sh@367 -- # return 0 00:04:22.850 22:04:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.850 22:04:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:22.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.850 --rc genhtml_branch_coverage=1 00:04:22.850 --rc genhtml_function_coverage=1 00:04:22.850 --rc genhtml_legend=1 00:04:22.850 --rc geninfo_all_blocks=1 00:04:22.850 --rc geninfo_unexecuted_blocks=1 00:04:22.850 00:04:22.850 ' 00:04:22.850 22:04:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:22.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.850 --rc genhtml_branch_coverage=1 00:04:22.850 --rc genhtml_function_coverage=1 00:04:22.850 --rc genhtml_legend=1 00:04:22.850 --rc geninfo_all_blocks=1 00:04:22.850 --rc geninfo_unexecuted_blocks=1 00:04:22.850 00:04:22.850 ' 
00:04:22.850 22:04:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:22.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.850 --rc genhtml_branch_coverage=1 00:04:22.850 --rc genhtml_function_coverage=1 00:04:22.850 --rc genhtml_legend=1 00:04:22.850 --rc geninfo_all_blocks=1 00:04:22.850 --rc geninfo_unexecuted_blocks=1 00:04:22.850 00:04:22.850 ' 00:04:22.850 22:04:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:22.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.850 --rc genhtml_branch_coverage=1 00:04:22.850 --rc genhtml_function_coverage=1 00:04:22.850 --rc genhtml_legend=1 00:04:22.850 --rc geninfo_all_blocks=1 00:04:22.850 --rc geninfo_unexecuted_blocks=1 00:04:22.850 00:04:22.850 ' 00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:22.850 22:04:19 -- nvmf/common.sh@7 -- # uname -s 00:04:22.850 22:04:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.850 22:04:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.850 22:04:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.850 22:04:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.850 22:04:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.850 22:04:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.850 22:04:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.850 22:04:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.850 22:04:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.850 22:04:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.850 22:04:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:04:22.850 22:04:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:04:22.850 22:04:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.850 22:04:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.850 22:04:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.850 22:04:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:22.850 22:04:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.850 22:04:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.850 22:04:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.850 22:04:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.850 22:04:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.850 22:04:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.850 22:04:19 -- paths/export.sh@5 -- # export PATH 00:04:22.850 22:04:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.850 22:04:19 -- nvmf/common.sh@46 -- # : 0 00:04:22.850 22:04:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:22.850 22:04:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:22.850 22:04:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:22.850 22:04:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.850 22:04:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.850 22:04:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:22.850 22:04:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:22.850 22:04:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.850 INFO: launching applications... 00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
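json_config_extra_key.sh keeps one entry per application in the app_pid, app_socket, app_params and configs_path associative arrays set up above; the launch step traced just below starts spdk_tgt with those parameters plus --json and then waits for the RPC socket to answer before the test proceeds. A minimal sketch of that launch-and-wait pattern, with illustrative paths, an illustrative retry budget, and a simpler polling probe than the real waitforlisten helper in autotest_common.sh:

#!/usr/bin/env bash
# Sketch of the launch step traced just below: start spdk_tgt from a JSON config
# and block until its RPC socket answers. Paths, the retry budget, and the use of
# rpc_get_methods as a liveness probe are illustrative assumptions.
set -euo pipefail

SPDK_BIN=${SPDK_BIN:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt}
RPC_PY=${RPC_PY:-/home/vagrant/spdk_repo/spdk/scripts/rpc.py}
RPC_SOCK=/var/tmp/spdk_tgt.sock
CONFIG_JSON=$1    # e.g. test/json_config/extra_key.json

"$SPDK_BIN" -m 0x1 -s 1024 -r "$RPC_SOCK" --json "$CONFIG_JSON" &
tgt_pid=$!
echo "Waiting for target to run... (pid $tgt_pid)"

# Poll the RPC socket until the target responds, giving up after ~50 seconds.
for _ in $(seq 1 100); do
    if "$RPC_PY" -s "$RPC_SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; then
        echo "target is listening on $RPC_SOCK"
        exit 0
    fi
    sleep 0.5
done

echo "ERROR: target (pid $tgt_pid) never started listening" >&2
kill "$tgt_pid" 2>/dev/null || true
exit 1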
00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:22.850 22:04:19 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:22.851 22:04:19 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:22.851 22:04:19 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:22.851 22:04:19 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:22.851 22:04:19 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56296 00:04:22.851 22:04:19 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:22.851 22:04:19 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:22.851 Waiting for target to run... 00:04:22.851 22:04:19 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56296 /var/tmp/spdk_tgt.sock 00:04:22.851 22:04:19 -- common/autotest_common.sh@829 -- # '[' -z 56296 ']' 00:04:22.851 22:04:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.851 22:04:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:22.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.851 22:04:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.851 22:04:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:22.851 22:04:19 -- common/autotest_common.sh@10 -- # set +x 00:04:22.851 [2024-11-17 22:04:19.360073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:22.851 [2024-11-17 22:04:19.360979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56296 ] 00:04:23.418 [2024-11-17 22:04:19.887246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.418 [2024-11-17 22:04:19.971811] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:23.418 [2024-11-17 22:04:19.971976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.677 22:04:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.677 22:04:20 -- common/autotest_common.sh@862 -- # return 0 00:04:23.677 00:04:23.677 22:04:20 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:23.677 INFO: shutting down applications... 00:04:23.677 22:04:20 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
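The shutdown traced just below is the inverse of the launch: send SIGINT to the recorded pid, then poll with kill -0 for at most 30 half-second intervals until the process disappears. A stripped-down sketch of that loop (pid handling is illustrative; the real json_config_test_shutdown_app additionally clears its app_pid bookkeeping):

#!/usr/bin/env bash
# Stripped-down sketch of the SIGINT shutdown loop traced just below.
set -u

tgt_pid=$1

kill -SIGINT "$tgt_pid"

for (( i = 0; i < 30; i++ )); do
    # kill -0 sends no signal; it only checks whether the process still exists.
    if ! kill -0 "$tgt_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        exit 0
    fi
    sleep 0.5
done

echo "ERROR: process $tgt_pid is still running after 15 seconds" >&2
exit 1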
00:04:23.677 22:04:20 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:23.677 22:04:20 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:23.677 22:04:20 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:23.677 22:04:20 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56296 ]] 00:04:23.677 22:04:20 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56296 00:04:23.677 22:04:20 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:23.677 22:04:20 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:23.677 22:04:20 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56296 00:04:23.677 22:04:20 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:24.246 22:04:20 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:24.246 22:04:20 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:24.246 22:04:20 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56296 00:04:24.246 22:04:20 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:24.814 22:04:21 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:24.814 22:04:21 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:24.814 22:04:21 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56296 00:04:24.814 22:04:21 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:24.814 22:04:21 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:24.814 22:04:21 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:24.814 SPDK target shutdown done 00:04:24.814 22:04:21 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:24.814 Success 00:04:24.814 22:04:21 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:24.814 00:04:24.814 real 0m2.269s 00:04:24.814 user 0m1.672s 00:04:24.814 sys 0m0.583s 00:04:24.814 22:04:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:24.814 22:04:21 -- common/autotest_common.sh@10 -- # set +x 00:04:24.814 ************************************ 00:04:24.814 END TEST json_config_extra_key 00:04:24.814 ************************************ 00:04:24.814 22:04:21 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.814 22:04:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.814 22:04:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.814 22:04:21 -- common/autotest_common.sh@10 -- # set +x 00:04:24.814 ************************************ 00:04:24.814 START TEST alias_rpc 00:04:24.814 ************************************ 00:04:24.814 22:04:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.814 * Looking for test storage... 
00:04:24.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:24.814 22:04:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:24.814 22:04:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:24.814 22:04:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:25.074 22:04:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:25.074 22:04:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:25.074 22:04:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:25.074 22:04:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:25.074 22:04:21 -- scripts/common.sh@335 -- # IFS=.-: 00:04:25.074 22:04:21 -- scripts/common.sh@335 -- # read -ra ver1 00:04:25.074 22:04:21 -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.074 22:04:21 -- scripts/common.sh@336 -- # read -ra ver2 00:04:25.074 22:04:21 -- scripts/common.sh@337 -- # local 'op=<' 00:04:25.074 22:04:21 -- scripts/common.sh@339 -- # ver1_l=2 00:04:25.074 22:04:21 -- scripts/common.sh@340 -- # ver2_l=1 00:04:25.074 22:04:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:25.074 22:04:21 -- scripts/common.sh@343 -- # case "$op" in 00:04:25.074 22:04:21 -- scripts/common.sh@344 -- # : 1 00:04:25.074 22:04:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:25.074 22:04:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.074 22:04:21 -- scripts/common.sh@364 -- # decimal 1 00:04:25.074 22:04:21 -- scripts/common.sh@352 -- # local d=1 00:04:25.074 22:04:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.074 22:04:21 -- scripts/common.sh@354 -- # echo 1 00:04:25.074 22:04:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:25.074 22:04:21 -- scripts/common.sh@365 -- # decimal 2 00:04:25.074 22:04:21 -- scripts/common.sh@352 -- # local d=2 00:04:25.074 22:04:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.074 22:04:21 -- scripts/common.sh@354 -- # echo 2 00:04:25.074 22:04:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:25.074 22:04:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:25.074 22:04:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:25.074 22:04:21 -- scripts/common.sh@367 -- # return 0 00:04:25.074 22:04:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.074 22:04:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:25.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.074 --rc genhtml_branch_coverage=1 00:04:25.074 --rc genhtml_function_coverage=1 00:04:25.074 --rc genhtml_legend=1 00:04:25.074 --rc geninfo_all_blocks=1 00:04:25.074 --rc geninfo_unexecuted_blocks=1 00:04:25.074 00:04:25.074 ' 00:04:25.074 22:04:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:25.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.074 --rc genhtml_branch_coverage=1 00:04:25.074 --rc genhtml_function_coverage=1 00:04:25.074 --rc genhtml_legend=1 00:04:25.074 --rc geninfo_all_blocks=1 00:04:25.074 --rc geninfo_unexecuted_blocks=1 00:04:25.074 00:04:25.074 ' 00:04:25.074 22:04:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:25.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.074 --rc genhtml_branch_coverage=1 00:04:25.074 --rc genhtml_function_coverage=1 00:04:25.074 --rc genhtml_legend=1 00:04:25.074 --rc geninfo_all_blocks=1 00:04:25.074 --rc geninfo_unexecuted_blocks=1 00:04:25.074 00:04:25.074 ' 
00:04:25.074 22:04:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:25.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.074 --rc genhtml_branch_coverage=1 00:04:25.074 --rc genhtml_function_coverage=1 00:04:25.074 --rc genhtml_legend=1 00:04:25.074 --rc geninfo_all_blocks=1 00:04:25.074 --rc geninfo_unexecuted_blocks=1 00:04:25.074 00:04:25.074 ' 00:04:25.074 22:04:21 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:25.074 22:04:21 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56386 00:04:25.074 22:04:21 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56386 00:04:25.074 22:04:21 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.074 22:04:21 -- common/autotest_common.sh@829 -- # '[' -z 56386 ']' 00:04:25.074 22:04:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.074 22:04:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.074 22:04:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.074 22:04:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.074 22:04:21 -- common/autotest_common.sh@10 -- # set +x 00:04:25.074 [2024-11-17 22:04:21.590440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:25.074 [2024-11-17 22:04:21.590550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56386 ] 00:04:25.333 [2024-11-17 22:04:21.721275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.333 [2024-11-17 22:04:21.810063] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:25.333 [2024-11-17 22:04:21.810229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.269 22:04:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.269 22:04:22 -- common/autotest_common.sh@862 -- # return 0 00:04:26.269 22:04:22 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:26.269 22:04:22 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56386 00:04:26.269 22:04:22 -- common/autotest_common.sh@936 -- # '[' -z 56386 ']' 00:04:26.269 22:04:22 -- common/autotest_common.sh@940 -- # kill -0 56386 00:04:26.269 22:04:22 -- common/autotest_common.sh@941 -- # uname 00:04:26.269 22:04:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:26.269 22:04:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56386 00:04:26.528 22:04:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:26.528 22:04:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:26.528 killing process with pid 56386 00:04:26.528 22:04:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56386' 00:04:26.528 22:04:22 -- common/autotest_common.sh@955 -- # kill 56386 00:04:26.528 22:04:22 -- common/autotest_common.sh@960 -- # wait 56386 00:04:27.097 00:04:27.097 real 0m2.111s 00:04:27.097 user 0m2.282s 00:04:27.097 sys 0m0.547s 00:04:27.097 22:04:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:27.097 22:04:23 -- common/autotest_common.sh@10 -- # set +x 
00:04:27.097 ************************************ 00:04:27.097 END TEST alias_rpc 00:04:27.097 ************************************ 00:04:27.097 22:04:23 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:04:27.098 22:04:23 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.098 22:04:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.098 22:04:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.098 22:04:23 -- common/autotest_common.sh@10 -- # set +x 00:04:27.098 ************************************ 00:04:27.098 START TEST dpdk_mem_utility 00:04:27.098 ************************************ 00:04:27.098 22:04:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.098 * Looking for test storage... 00:04:27.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:27.098 22:04:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:27.098 22:04:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:27.098 22:04:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:27.098 22:04:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:27.098 22:04:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:27.098 22:04:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:27.098 22:04:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:27.098 22:04:23 -- scripts/common.sh@335 -- # IFS=.-: 00:04:27.098 22:04:23 -- scripts/common.sh@335 -- # read -ra ver1 00:04:27.098 22:04:23 -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.098 22:04:23 -- scripts/common.sh@336 -- # read -ra ver2 00:04:27.098 22:04:23 -- scripts/common.sh@337 -- # local 'op=<' 00:04:27.098 22:04:23 -- scripts/common.sh@339 -- # ver1_l=2 00:04:27.098 22:04:23 -- scripts/common.sh@340 -- # ver2_l=1 00:04:27.098 22:04:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:27.098 22:04:23 -- scripts/common.sh@343 -- # case "$op" in 00:04:27.098 22:04:23 -- scripts/common.sh@344 -- # : 1 00:04:27.098 22:04:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:27.098 22:04:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.098 22:04:23 -- scripts/common.sh@364 -- # decimal 1 00:04:27.098 22:04:23 -- scripts/common.sh@352 -- # local d=1 00:04:27.098 22:04:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.098 22:04:23 -- scripts/common.sh@354 -- # echo 1 00:04:27.098 22:04:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:27.098 22:04:23 -- scripts/common.sh@365 -- # decimal 2 00:04:27.098 22:04:23 -- scripts/common.sh@352 -- # local d=2 00:04:27.098 22:04:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.098 22:04:23 -- scripts/common.sh@354 -- # echo 2 00:04:27.098 22:04:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:27.098 22:04:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:27.098 22:04:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:27.098 22:04:23 -- scripts/common.sh@367 -- # return 0 00:04:27.098 22:04:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.098 22:04:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:27.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.098 --rc genhtml_branch_coverage=1 00:04:27.098 --rc genhtml_function_coverage=1 00:04:27.098 --rc genhtml_legend=1 00:04:27.098 --rc geninfo_all_blocks=1 00:04:27.098 --rc geninfo_unexecuted_blocks=1 00:04:27.098 00:04:27.098 ' 00:04:27.098 22:04:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:27.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.098 --rc genhtml_branch_coverage=1 00:04:27.098 --rc genhtml_function_coverage=1 00:04:27.098 --rc genhtml_legend=1 00:04:27.098 --rc geninfo_all_blocks=1 00:04:27.098 --rc geninfo_unexecuted_blocks=1 00:04:27.098 00:04:27.098 ' 00:04:27.098 22:04:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:27.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.098 --rc genhtml_branch_coverage=1 00:04:27.098 --rc genhtml_function_coverage=1 00:04:27.098 --rc genhtml_legend=1 00:04:27.098 --rc geninfo_all_blocks=1 00:04:27.098 --rc geninfo_unexecuted_blocks=1 00:04:27.098 00:04:27.098 ' 00:04:27.098 22:04:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:27.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.098 --rc genhtml_branch_coverage=1 00:04:27.098 --rc genhtml_function_coverage=1 00:04:27.098 --rc genhtml_legend=1 00:04:27.098 --rc geninfo_all_blocks=1 00:04:27.098 --rc geninfo_unexecuted_blocks=1 00:04:27.098 00:04:27.098 ' 00:04:27.098 22:04:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:27.098 22:04:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56485 00:04:27.098 22:04:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56485 00:04:27.098 22:04:23 -- common/autotest_common.sh@829 -- # '[' -z 56485 ']' 00:04:27.098 22:04:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.098 22:04:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.098 22:04:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.098 22:04:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
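Once this target is listening, the dpdk_mem_utility trace below asks it for a DPDK memory dump via the env_dpdk_get_mem_stats RPC (the reply names the dump file, /tmp/spdk_mem_dump.txt) and then post-processes that dump with scripts/dpdk_mem_info.py, first for the heap/mempool/memzone totals and then with -m 0 for the per-element listing of heap 0. A minimal sketch of those steps against an already-running target, with the checkout path and RPC socket as illustrative assumptions:

#!/usr/bin/env bash
# Sketch of the dpdk_mem_utility steps traced below, run against an spdk_tgt that
# is already up. The checkout path and RPC socket are illustrative assumptions.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
RPC_SOCK=${RPC_SOCK:-/var/tmp/spdk.sock}

# Ask the target to write its DPDK memory statistics to a dump file; the RPC
# reply names the file, e.g. { "filename": "/tmp/spdk_mem_dump.txt" }.
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" env_dpdk_get_mem_stats

# Summarize the dump: heap, mempool and memzone totals first ...
"$SPDK_DIR/scripts/dpdk_mem_info.py"

# ... then the per-element free/malloc listing for heap 0.
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0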
00:04:27.098 22:04:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.098 22:04:23 -- common/autotest_common.sh@10 -- # set +x 00:04:27.357 [2024-11-17 22:04:23.757257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:27.357 [2024-11-17 22:04:23.757350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56485 ] 00:04:27.357 [2024-11-17 22:04:23.892274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.616 [2024-11-17 22:04:24.038960] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:27.616 [2024-11-17 22:04:24.039140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.183 22:04:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.183 22:04:24 -- common/autotest_common.sh@862 -- # return 0 00:04:28.183 22:04:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:28.183 22:04:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:28.183 22:04:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.183 22:04:24 -- common/autotest_common.sh@10 -- # set +x 00:04:28.183 { 00:04:28.183 "filename": "/tmp/spdk_mem_dump.txt" 00:04:28.183 } 00:04:28.183 22:04:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.183 22:04:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:28.183 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:28.183 1 heaps totaling size 814.000000 MiB 00:04:28.183 size: 814.000000 MiB heap id: 0 00:04:28.183 end heaps---------- 00:04:28.183 8 mempools totaling size 598.116089 MiB 00:04:28.183 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:28.183 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:28.183 size: 84.521057 MiB name: bdev_io_56485 00:04:28.183 size: 51.011292 MiB name: evtpool_56485 00:04:28.183 size: 50.003479 MiB name: msgpool_56485 00:04:28.183 size: 21.763794 MiB name: PDU_Pool 00:04:28.183 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:28.183 size: 0.026123 MiB name: Session_Pool 00:04:28.183 end mempools------- 00:04:28.183 6 memzones totaling size 4.142822 MiB 00:04:28.183 size: 1.000366 MiB name: RG_ring_0_56485 00:04:28.183 size: 1.000366 MiB name: RG_ring_1_56485 00:04:28.183 size: 1.000366 MiB name: RG_ring_4_56485 00:04:28.183 size: 1.000366 MiB name: RG_ring_5_56485 00:04:28.183 size: 0.125366 MiB name: RG_ring_2_56485 00:04:28.183 size: 0.015991 MiB name: RG_ring_3_56485 00:04:28.183 end memzones------- 00:04:28.183 22:04:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:28.443 heap id: 0 total size: 814.000000 MiB number of busy elements: 208 number of free elements: 15 00:04:28.443 list of free elements. 
size: 12.488770 MiB 00:04:28.443 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:28.443 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:28.443 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:28.443 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:28.443 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:28.443 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:28.443 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:28.443 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:28.443 element at address: 0x200000200000 with size: 0.837219 MiB 00:04:28.443 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:04:28.443 element at address: 0x20000b200000 with size: 0.489990 MiB 00:04:28.443 element at address: 0x200000800000 with size: 0.487061 MiB 00:04:28.444 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:28.444 element at address: 0x200027e00000 with size: 0.399414 MiB 00:04:28.444 element at address: 0x200003a00000 with size: 0.351685 MiB 00:04:28.444 list of standard malloc elements. size: 199.248657 MiB 00:04:28.444 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:28.444 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:28.444 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:28.444 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:28.444 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:28.444 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:28.444 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:28.444 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:28.444 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:28.444 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:04:28.444 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:28.444 element at 
address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:28.444 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:28.445 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e66400 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e664c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6d0c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6eac0 with size: 0.000183 MiB 
00:04:28.445 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:28.445 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:28.445 list of memzone associated elements. 
size: 602.262573 MiB 00:04:28.445 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:28.445 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:28.445 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:28.445 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:28.445 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:28.445 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56485_0 00:04:28.445 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:28.445 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56485_0 00:04:28.445 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:28.445 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56485_0 00:04:28.445 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:28.445 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:28.445 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:28.445 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:28.445 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:28.445 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56485 00:04:28.445 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:28.445 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56485 00:04:28.445 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:28.445 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56485 00:04:28.445 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:28.445 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:28.445 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:28.445 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:28.445 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:28.445 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:28.445 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:28.445 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:28.445 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:28.445 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56485 00:04:28.445 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:28.445 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56485 00:04:28.445 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:28.445 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56485 00:04:28.445 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:28.445 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56485 00:04:28.445 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:28.445 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56485 00:04:28.445 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:28.445 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:28.445 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:28.445 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:28.445 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:28.445 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:28.445 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:28.445 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56485 00:04:28.445 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:28.445 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:28.445 element at address: 0x200027e66580 with size: 0.023743 MiB 00:04:28.445 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:28.445 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:28.445 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56485 00:04:28.445 element at address: 0x200027e6c6c0 with size: 0.002441 MiB 00:04:28.445 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:28.445 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:28.445 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56485 00:04:28.445 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:28.446 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56485 00:04:28.446 element at address: 0x200027e6d180 with size: 0.000305 MiB 00:04:28.446 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:28.446 22:04:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:28.446 22:04:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56485 00:04:28.446 22:04:24 -- common/autotest_common.sh@936 -- # '[' -z 56485 ']' 00:04:28.446 22:04:24 -- common/autotest_common.sh@940 -- # kill -0 56485 00:04:28.446 22:04:24 -- common/autotest_common.sh@941 -- # uname 00:04:28.446 22:04:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:28.446 22:04:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56485 00:04:28.446 22:04:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:28.446 killing process with pid 56485 00:04:28.446 22:04:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:28.446 22:04:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56485' 00:04:28.446 22:04:24 -- common/autotest_common.sh@955 -- # kill 56485 00:04:28.446 22:04:24 -- common/autotest_common.sh@960 -- # wait 56485 00:04:29.013 00:04:29.013 real 0m1.914s 00:04:29.013 user 0m1.966s 00:04:29.013 sys 0m0.508s 00:04:29.013 22:04:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.013 ************************************ 00:04:29.013 END TEST dpdk_mem_utility 00:04:29.013 ************************************ 00:04:29.013 22:04:25 -- common/autotest_common.sh@10 -- # set +x 00:04:29.013 22:04:25 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:29.013 22:04:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.013 22:04:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.013 22:04:25 -- common/autotest_common.sh@10 -- # set +x 00:04:29.013 ************************************ 00:04:29.013 START TEST event 00:04:29.013 ************************************ 00:04:29.013 22:04:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:29.013 * Looking for test storage... 
00:04:29.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:29.014 22:04:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:29.014 22:04:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:29.014 22:04:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:29.272 22:04:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:29.272 22:04:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:29.272 22:04:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:29.272 22:04:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:29.272 22:04:25 -- scripts/common.sh@335 -- # IFS=.-: 00:04:29.272 22:04:25 -- scripts/common.sh@335 -- # read -ra ver1 00:04:29.272 22:04:25 -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.272 22:04:25 -- scripts/common.sh@336 -- # read -ra ver2 00:04:29.272 22:04:25 -- scripts/common.sh@337 -- # local 'op=<' 00:04:29.272 22:04:25 -- scripts/common.sh@339 -- # ver1_l=2 00:04:29.272 22:04:25 -- scripts/common.sh@340 -- # ver2_l=1 00:04:29.273 22:04:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:29.273 22:04:25 -- scripts/common.sh@343 -- # case "$op" in 00:04:29.273 22:04:25 -- scripts/common.sh@344 -- # : 1 00:04:29.273 22:04:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:29.273 22:04:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.273 22:04:25 -- scripts/common.sh@364 -- # decimal 1 00:04:29.273 22:04:25 -- scripts/common.sh@352 -- # local d=1 00:04:29.273 22:04:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.273 22:04:25 -- scripts/common.sh@354 -- # echo 1 00:04:29.273 22:04:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:29.273 22:04:25 -- scripts/common.sh@365 -- # decimal 2 00:04:29.273 22:04:25 -- scripts/common.sh@352 -- # local d=2 00:04:29.273 22:04:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.273 22:04:25 -- scripts/common.sh@354 -- # echo 2 00:04:29.273 22:04:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:29.273 22:04:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:29.273 22:04:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:29.273 22:04:25 -- scripts/common.sh@367 -- # return 0 00:04:29.273 22:04:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.273 22:04:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:29.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.273 --rc genhtml_branch_coverage=1 00:04:29.273 --rc genhtml_function_coverage=1 00:04:29.273 --rc genhtml_legend=1 00:04:29.273 --rc geninfo_all_blocks=1 00:04:29.273 --rc geninfo_unexecuted_blocks=1 00:04:29.273 00:04:29.273 ' 00:04:29.273 22:04:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:29.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.273 --rc genhtml_branch_coverage=1 00:04:29.273 --rc genhtml_function_coverage=1 00:04:29.273 --rc genhtml_legend=1 00:04:29.273 --rc geninfo_all_blocks=1 00:04:29.273 --rc geninfo_unexecuted_blocks=1 00:04:29.273 00:04:29.273 ' 00:04:29.273 22:04:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:29.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.273 --rc genhtml_branch_coverage=1 00:04:29.273 --rc genhtml_function_coverage=1 00:04:29.273 --rc genhtml_legend=1 00:04:29.273 --rc geninfo_all_blocks=1 00:04:29.273 --rc geninfo_unexecuted_blocks=1 00:04:29.273 00:04:29.273 ' 00:04:29.273 22:04:25 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:29.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.273 --rc genhtml_branch_coverage=1 00:04:29.273 --rc genhtml_function_coverage=1 00:04:29.273 --rc genhtml_legend=1 00:04:29.273 --rc geninfo_all_blocks=1 00:04:29.273 --rc geninfo_unexecuted_blocks=1 00:04:29.273 00:04:29.273 ' 00:04:29.273 22:04:25 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:29.273 22:04:25 -- bdev/nbd_common.sh@6 -- # set -e 00:04:29.273 22:04:25 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.273 22:04:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:29.273 22:04:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.273 22:04:25 -- common/autotest_common.sh@10 -- # set +x 00:04:29.273 ************************************ 00:04:29.273 START TEST event_perf 00:04:29.273 ************************************ 00:04:29.273 22:04:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.273 Running I/O for 1 seconds...[2024-11-17 22:04:25.681298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:29.273 [2024-11-17 22:04:25.681524] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56586 ] 00:04:29.273 [2024-11-17 22:04:25.820785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.532 [2024-11-17 22:04:25.936119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.532 [2024-11-17 22:04:25.936261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.532 Running I/O for 1 seconds...[2024-11-17 22:04:25.936407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.532 [2024-11-17 22:04:25.936415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.468 00:04:30.468 lcore 0: 126449 00:04:30.468 lcore 1: 126451 00:04:30.468 lcore 2: 126452 00:04:30.468 lcore 3: 126449 00:04:30.468 done. 00:04:30.468 00:04:30.468 real 0m1.411s 00:04:30.468 user 0m4.213s 00:04:30.468 sys 0m0.079s 00:04:30.468 22:04:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:30.468 22:04:27 -- common/autotest_common.sh@10 -- # set +x 00:04:30.468 ************************************ 00:04:30.468 END TEST event_perf 00:04:30.468 ************************************ 00:04:30.727 22:04:27 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:30.727 22:04:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:30.727 22:04:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.727 22:04:27 -- common/autotest_common.sh@10 -- # set +x 00:04:30.727 ************************************ 00:04:30.727 START TEST event_reactor 00:04:30.727 ************************************ 00:04:30.727 22:04:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:30.727 [2024-11-17 22:04:27.136233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:30.727 [2024-11-17 22:04:27.136641] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56630 ] 00:04:30.727 [2024-11-17 22:04:27.274829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.986 [2024-11-17 22:04:27.356471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.923 test_start 00:04:31.923 oneshot 00:04:31.923 tick 100 00:04:31.923 tick 100 00:04:31.923 tick 250 00:04:31.923 tick 100 00:04:31.923 tick 100 00:04:31.923 tick 100 00:04:31.923 tick 250 00:04:31.923 tick 500 00:04:31.923 tick 100 00:04:31.923 tick 100 00:04:31.923 tick 250 00:04:31.923 tick 100 00:04:31.923 tick 100 00:04:31.923 test_end 00:04:31.923 00:04:31.923 real 0m1.356s 00:04:31.923 user 0m1.195s 00:04:31.923 sys 0m0.056s 00:04:31.923 22:04:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:31.923 22:04:28 -- common/autotest_common.sh@10 -- # set +x 00:04:31.923 ************************************ 00:04:31.923 END TEST event_reactor 00:04:31.923 ************************************ 00:04:31.923 22:04:28 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:31.923 22:04:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:31.923 22:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.923 22:04:28 -- common/autotest_common.sh@10 -- # set +x 00:04:31.923 ************************************ 00:04:31.923 START TEST event_reactor_perf 00:04:31.923 ************************************ 00:04:31.923 22:04:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:32.182 [2024-11-17 22:04:28.545340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:32.182 [2024-11-17 22:04:28.545436] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56660 ] 00:04:32.182 [2024-11-17 22:04:28.682178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.182 [2024-11-17 22:04:28.763661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.580 test_start 00:04:33.580 test_end 00:04:33.580 Performance: 458584 events per second 00:04:33.580 ************************************ 00:04:33.580 END TEST event_reactor_perf 00:04:33.580 ************************************ 00:04:33.580 00:04:33.580 real 0m1.362s 00:04:33.580 user 0m1.187s 00:04:33.580 sys 0m0.069s 00:04:33.580 22:04:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.580 22:04:29 -- common/autotest_common.sh@10 -- # set +x 00:04:33.580 22:04:29 -- event/event.sh@49 -- # uname -s 00:04:33.580 22:04:29 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:33.580 22:04:29 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:33.580 22:04:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.580 22:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.580 22:04:29 -- common/autotest_common.sh@10 -- # set +x 00:04:33.580 ************************************ 00:04:33.580 START TEST event_scheduler 00:04:33.580 ************************************ 00:04:33.580 22:04:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:33.580 * Looking for test storage... 00:04:33.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:33.580 22:04:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:33.580 22:04:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:33.580 22:04:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:33.580 22:04:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:33.580 22:04:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:33.580 22:04:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:33.580 22:04:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:33.580 22:04:30 -- scripts/common.sh@335 -- # IFS=.-: 00:04:33.580 22:04:30 -- scripts/common.sh@335 -- # read -ra ver1 00:04:33.580 22:04:30 -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.580 22:04:30 -- scripts/common.sh@336 -- # read -ra ver2 00:04:33.580 22:04:30 -- scripts/common.sh@337 -- # local 'op=<' 00:04:33.580 22:04:30 -- scripts/common.sh@339 -- # ver1_l=2 00:04:33.580 22:04:30 -- scripts/common.sh@340 -- # ver2_l=1 00:04:33.580 22:04:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:33.580 22:04:30 -- scripts/common.sh@343 -- # case "$op" in 00:04:33.580 22:04:30 -- scripts/common.sh@344 -- # : 1 00:04:33.580 22:04:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:33.580 22:04:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.580 22:04:30 -- scripts/common.sh@364 -- # decimal 1 00:04:33.580 22:04:30 -- scripts/common.sh@352 -- # local d=1 00:04:33.580 22:04:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.580 22:04:30 -- scripts/common.sh@354 -- # echo 1 00:04:33.580 22:04:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:33.580 22:04:30 -- scripts/common.sh@365 -- # decimal 2 00:04:33.580 22:04:30 -- scripts/common.sh@352 -- # local d=2 00:04:33.580 22:04:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.580 22:04:30 -- scripts/common.sh@354 -- # echo 2 00:04:33.580 22:04:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:33.580 22:04:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:33.580 22:04:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:33.580 22:04:30 -- scripts/common.sh@367 -- # return 0 00:04:33.580 22:04:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.580 22:04:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:33.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.580 --rc genhtml_branch_coverage=1 00:04:33.580 --rc genhtml_function_coverage=1 00:04:33.580 --rc genhtml_legend=1 00:04:33.580 --rc geninfo_all_blocks=1 00:04:33.580 --rc geninfo_unexecuted_blocks=1 00:04:33.580 00:04:33.580 ' 00:04:33.580 22:04:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:33.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.580 --rc genhtml_branch_coverage=1 00:04:33.580 --rc genhtml_function_coverage=1 00:04:33.580 --rc genhtml_legend=1 00:04:33.580 --rc geninfo_all_blocks=1 00:04:33.580 --rc geninfo_unexecuted_blocks=1 00:04:33.580 00:04:33.580 ' 00:04:33.580 22:04:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:33.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.580 --rc genhtml_branch_coverage=1 00:04:33.580 --rc genhtml_function_coverage=1 00:04:33.580 --rc genhtml_legend=1 00:04:33.580 --rc geninfo_all_blocks=1 00:04:33.580 --rc geninfo_unexecuted_blocks=1 00:04:33.580 00:04:33.580 ' 00:04:33.580 22:04:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:33.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.580 --rc genhtml_branch_coverage=1 00:04:33.580 --rc genhtml_function_coverage=1 00:04:33.580 --rc genhtml_legend=1 00:04:33.580 --rc geninfo_all_blocks=1 00:04:33.580 --rc geninfo_unexecuted_blocks=1 00:04:33.580 00:04:33.580 ' 00:04:33.580 22:04:30 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:33.580 22:04:30 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56734 00:04:33.580 22:04:30 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:33.580 22:04:30 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.580 22:04:30 -- scheduler/scheduler.sh@37 -- # waitforlisten 56734 00:04:33.580 22:04:30 -- common/autotest_common.sh@829 -- # '[' -z 56734 ']' 00:04:33.580 22:04:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.580 22:04:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.580 22:04:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
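The scheduler run that follows is driven entirely over the app's RPC socket. Condensed to the underlying rpc.py calls (a sketch assuming the default /var/tmp/spdk.sock socket; rpc_cmd in the trace is a thin wrapper around rpc.py, and scheduler_thread_create is a plugin RPC supplied by the test app, not part of the core RPC set, so the plugin module has to be importable, e.g. via PYTHONPATH):

  # pick the dynamic scheduler before framework initialization, then start the framework
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init
  # create a busy thread pinned to core 0 (one of the active_pinned/idle_pinned threads created below)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100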
00:04:33.580 22:04:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.580 22:04:30 -- common/autotest_common.sh@10 -- # set +x 00:04:33.580 [2024-11-17 22:04:30.151202] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:33.580 [2024-11-17 22:04:30.151446] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56734 ] 00:04:33.839 [2024-11-17 22:04:30.291901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:33.839 [2024-11-17 22:04:30.416440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.839 [2024-11-17 22:04:30.416585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.839 [2024-11-17 22:04:30.416708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.839 [2024-11-17 22:04:30.417089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.776 22:04:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.776 22:04:31 -- common/autotest_common.sh@862 -- # return 0 00:04:34.776 22:04:31 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:34.776 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.776 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 POWER: Env isn't set yet! 00:04:34.776 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:34.776 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.776 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.776 POWER: Attempting to initialise PSTAT power management... 00:04:34.776 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.776 POWER: Cannot set governor of lcore 0 to performance 00:04:34.776 POWER: Attempting to initialise AMD PSTATE power management... 00:04:34.776 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.776 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.776 POWER: Attempting to initialise CPPC power management... 00:04:34.776 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.776 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.776 POWER: Attempting to initialise VM power management... 
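The POWER messages above are the dynamic scheduler's DPDK governor probing the kernel cpufreq interface; in this VM the scaling_governor files apparently cannot be opened, so every governor type fails and, as the next lines show, the test simply continues without a dpdk governor. A quick way to check what a given host exposes (cpu0 used as an example; same sysfs path as in the messages above):

  # governors offered by the driver, and the one currently active, for cpu0
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  # if there is no cpufreq driver at all (common in VMs), the directory is simply absent
  ls /sys/devices/system/cpu/cpu0/cpufreq 2>/dev/null || echo 'no cpufreq support on this host'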
00:04:34.776 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:34.776 POWER: Unable to set Power Management Environment for lcore 0 00:04:34.776 [2024-11-17 22:04:31.214594] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:34.776 [2024-11-17 22:04:31.214608] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:34.776 [2024-11-17 22:04:31.214615] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:34.776 [2024-11-17 22:04:31.214628] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:34.776 [2024-11-17 22:04:31.214634] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:34.776 [2024-11-17 22:04:31.214640] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:34.776 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.776 22:04:31 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:34.776 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.776 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 [2024-11-17 22:04:31.300924] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:34.776 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.776 22:04:31 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:34.776 22:04:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.776 22:04:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.776 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 ************************************ 00:04:34.776 START TEST scheduler_create_thread 00:04:34.776 ************************************ 00:04:34.776 22:04:31 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:04:34.776 22:04:31 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:34.776 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.776 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 2 00:04:34.776 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.776 22:04:31 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:34.776 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.776 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 3 00:04:34.776 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.776 22:04:31 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:34.776 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.776 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 4 00:04:34.776 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.776 22:04:31 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:34.776 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.776 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 5 00:04:34.776 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.776 22:04:31 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:34.776 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.776 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 6 00:04:34.776 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.776 22:04:31 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:34.776 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.776 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 7 00:04:34.776 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.776 22:04:31 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:34.776 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.776 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 8 00:04:34.776 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.776 22:04:31 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:34.776 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.776 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:34.776 9 00:04:34.776 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.777 22:04:31 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:34.777 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.777 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:35.035 10 00:04:35.035 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.035 22:04:31 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:35.035 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.035 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:35.035 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.035 22:04:31 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:35.035 22:04:31 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:35.035 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.035 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:35.035 22:04:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.035 22:04:31 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:35.035 22:04:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.035 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:04:36.412 22:04:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.412 22:04:32 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:36.412 22:04:32 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:36.412 22:04:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.412 22:04:32 -- common/autotest_common.sh@10 -- # set +x 00:04:37.349 ************************************ 00:04:37.349 END TEST scheduler_create_thread 00:04:37.349 ************************************ 00:04:37.349 22:04:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.349 00:04:37.349 real 0m2.611s 00:04:37.349 user 0m0.018s 00:04:37.349 sys 0m0.006s 00:04:37.349 22:04:33 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:04:37.349 22:04:33 -- common/autotest_common.sh@10 -- # set +x 00:04:37.608 22:04:33 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:37.608 22:04:33 -- scheduler/scheduler.sh@46 -- # killprocess 56734 00:04:37.608 22:04:33 -- common/autotest_common.sh@936 -- # '[' -z 56734 ']' 00:04:37.608 22:04:33 -- common/autotest_common.sh@940 -- # kill -0 56734 00:04:37.608 22:04:33 -- common/autotest_common.sh@941 -- # uname 00:04:37.608 22:04:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:37.608 22:04:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56734 00:04:37.608 22:04:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:37.608 22:04:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:37.608 killing process with pid 56734 00:04:37.608 22:04:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56734' 00:04:37.608 22:04:34 -- common/autotest_common.sh@955 -- # kill 56734 00:04:37.608 22:04:34 -- common/autotest_common.sh@960 -- # wait 56734 00:04:37.866 [2024-11-17 22:04:34.405524] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:38.125 00:04:38.125 real 0m4.715s 00:04:38.125 user 0m9.042s 00:04:38.125 sys 0m0.366s 00:04:38.125 22:04:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.125 ************************************ 00:04:38.125 END TEST event_scheduler 00:04:38.125 ************************************ 00:04:38.125 22:04:34 -- common/autotest_common.sh@10 -- # set +x 00:04:38.125 22:04:34 -- event/event.sh@51 -- # modprobe -n nbd 00:04:38.125 22:04:34 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:38.125 22:04:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.125 22:04:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.125 22:04:34 -- common/autotest_common.sh@10 -- # set +x 00:04:38.125 ************************************ 00:04:38.125 START TEST app_repeat 00:04:38.125 ************************************ 00:04:38.125 22:04:34 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:04:38.125 22:04:34 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.125 22:04:34 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.125 22:04:34 -- event/event.sh@13 -- # local nbd_list 00:04:38.125 22:04:34 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.125 22:04:34 -- event/event.sh@14 -- # local bdev_list 00:04:38.125 22:04:34 -- event/event.sh@15 -- # local repeat_times=4 00:04:38.125 22:04:34 -- event/event.sh@17 -- # modprobe nbd 00:04:38.125 Process app_repeat pid: 56846 00:04:38.125 22:04:34 -- event/event.sh@19 -- # repeat_pid=56846 00:04:38.125 22:04:34 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:38.125 22:04:34 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.125 22:04:34 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 56846' 00:04:38.125 22:04:34 -- event/event.sh@23 -- # for i in {0..2} 00:04:38.125 spdk_app_start Round 0 00:04:38.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
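Each of the three app_repeat rounds that follow performs the same RPC-driven NBD round trip; condensed from the nbd_common.sh calls visible in the trace below (same socket, bdev parameters and 1 MiB test pattern as in this run), the sequence is roughly:

  # two 64 MB malloc bdevs with a 4 KiB block size, exported as /dev/nbd0 and /dev/nbd1
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # -> Malloc1
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
  # write a random pattern through each NBD device and read it back for comparison
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M nbdrandtest "$nbd"
  done
  # tear down the NBD exports and signal the app to shut down before the next round
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM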
00:04:38.125 22:04:34 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:38.125 22:04:34 -- event/event.sh@25 -- # waitforlisten 56846 /var/tmp/spdk-nbd.sock 00:04:38.125 22:04:34 -- common/autotest_common.sh@829 -- # '[' -z 56846 ']' 00:04:38.125 22:04:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.125 22:04:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.125 22:04:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.125 22:04:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.125 22:04:34 -- common/autotest_common.sh@10 -- # set +x 00:04:38.385 [2024-11-17 22:04:34.754817] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:38.385 [2024-11-17 22:04:34.755125] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56846 ] 00:04:38.385 [2024-11-17 22:04:34.891941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.385 [2024-11-17 22:04:34.982518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.385 [2024-11-17 22:04:34.982525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.322 22:04:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.322 22:04:35 -- common/autotest_common.sh@862 -- # return 0 00:04:39.322 22:04:35 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.322 Malloc0 00:04:39.322 22:04:35 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.581 Malloc1 00:04:39.581 22:04:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@12 -- # local i 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.581 22:04:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.839 /dev/nbd0 00:04:39.839 22:04:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.839 22:04:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.839 22:04:36 -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd0 00:04:39.839 22:04:36 -- common/autotest_common.sh@867 -- # local i 00:04:39.839 22:04:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:39.839 22:04:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:39.839 22:04:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:39.839 22:04:36 -- common/autotest_common.sh@871 -- # break 00:04:39.839 22:04:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:39.839 22:04:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:39.839 22:04:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.839 1+0 records in 00:04:39.839 1+0 records out 00:04:39.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343759 s, 11.9 MB/s 00:04:39.839 22:04:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:39.839 22:04:36 -- common/autotest_common.sh@884 -- # size=4096 00:04:39.839 22:04:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:39.839 22:04:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:39.839 22:04:36 -- common/autotest_common.sh@887 -- # return 0 00:04:39.839 22:04:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.839 22:04:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.839 22:04:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.407 /dev/nbd1 00:04:40.407 22:04:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.407 22:04:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.407 22:04:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:40.407 22:04:36 -- common/autotest_common.sh@867 -- # local i 00:04:40.407 22:04:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:40.407 22:04:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:40.407 22:04:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:40.407 22:04:36 -- common/autotest_common.sh@871 -- # break 00:04:40.407 22:04:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:40.407 22:04:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:40.407 22:04:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.407 1+0 records in 00:04:40.407 1+0 records out 00:04:40.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035608 s, 11.5 MB/s 00:04:40.407 22:04:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.407 22:04:36 -- common/autotest_common.sh@884 -- # size=4096 00:04:40.407 22:04:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.407 22:04:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:40.407 22:04:36 -- common/autotest_common.sh@887 -- # return 0 00:04:40.407 22:04:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.407 22:04:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.407 22:04:36 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.407 22:04:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.407 22:04:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.407 22:04:36 -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:40.407 { 00:04:40.407 "bdev_name": "Malloc0", 00:04:40.407 "nbd_device": "/dev/nbd0" 00:04:40.407 }, 00:04:40.407 { 00:04:40.407 "bdev_name": "Malloc1", 00:04:40.407 "nbd_device": "/dev/nbd1" 00:04:40.407 } 00:04:40.407 ]' 00:04:40.407 22:04:36 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.407 { 00:04:40.407 "bdev_name": "Malloc0", 00:04:40.407 "nbd_device": "/dev/nbd0" 00:04:40.407 }, 00:04:40.407 { 00:04:40.407 "bdev_name": "Malloc1", 00:04:40.407 "nbd_device": "/dev/nbd1" 00:04:40.407 } 00:04:40.407 ]' 00:04:40.407 22:04:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.407 22:04:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.407 /dev/nbd1' 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.666 /dev/nbd1' 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.666 256+0 records in 00:04:40.666 256+0 records out 00:04:40.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105437 s, 99.5 MB/s 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.666 256+0 records in 00:04:40.666 256+0 records out 00:04:40.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217892 s, 48.1 MB/s 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:40.666 256+0 records in 00:04:40.666 256+0 records out 00:04:40.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314974 s, 33.3 MB/s 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:04:40.666 22:04:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@51 -- # local i 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.666 22:04:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.926 22:04:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.926 22:04:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.926 22:04:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.926 22:04:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.926 22:04:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.926 22:04:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.926 22:04:37 -- bdev/nbd_common.sh@41 -- # break 00:04:40.926 22:04:37 -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.926 22:04:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.926 22:04:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.184 22:04:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.184 22:04:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.184 22:04:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.184 22:04:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.184 22:04:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.184 22:04:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.184 22:04:37 -- bdev/nbd_common.sh@41 -- # break 00:04:41.184 22:04:37 -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.184 22:04:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.184 22:04:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.184 22:04:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.442 22:04:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.443 22:04:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.443 22:04:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.443 22:04:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.443 22:04:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.443 22:04:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.443 22:04:37 -- bdev/nbd_common.sh@65 -- # true 00:04:41.443 22:04:37 -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.443 22:04:37 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.443 22:04:37 -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.443 22:04:37 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.443 22:04:37 -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.443 22:04:37 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.701 22:04:38 -- event/event.sh@35 -- # sleep 3 00:04:42.269 [2024-11-17 22:04:38.605502] app.c: 
798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.269 [2024-11-17 22:04:38.674877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.269 [2024-11-17 22:04:38.674887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.269 [2024-11-17 22:04:38.745465] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.269 [2024-11-17 22:04:38.745536] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.801 22:04:41 -- event/event.sh@23 -- # for i in {0..2} 00:04:44.801 spdk_app_start Round 1 00:04:44.801 22:04:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:44.801 22:04:41 -- event/event.sh@25 -- # waitforlisten 56846 /var/tmp/spdk-nbd.sock 00:04:44.801 22:04:41 -- common/autotest_common.sh@829 -- # '[' -z 56846 ']' 00:04:44.801 22:04:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.801 22:04:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:44.801 22:04:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.801 22:04:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.801 22:04:41 -- common/autotest_common.sh@10 -- # set +x 00:04:45.060 22:04:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.060 22:04:41 -- common/autotest_common.sh@862 -- # return 0 00:04:45.060 22:04:41 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.319 Malloc0 00:04:45.319 22:04:41 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.578 Malloc1 00:04:45.578 22:04:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@12 -- # local i 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.578 22:04:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:45.837 /dev/nbd0 00:04:45.837 22:04:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.837 22:04:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.837 22:04:42 -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:45.837 22:04:42 -- common/autotest_common.sh@867 -- # local i 00:04:45.837 22:04:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:45.837 22:04:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:45.837 22:04:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:45.837 22:04:42 -- common/autotest_common.sh@871 -- # break 00:04:45.837 22:04:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:45.837 22:04:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:45.837 22:04:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.837 1+0 records in 00:04:45.837 1+0 records out 00:04:45.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257136 s, 15.9 MB/s 00:04:45.837 22:04:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:45.837 22:04:42 -- common/autotest_common.sh@884 -- # size=4096 00:04:45.837 22:04:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:45.837 22:04:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:45.837 22:04:42 -- common/autotest_common.sh@887 -- # return 0 00:04:45.837 22:04:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.837 22:04:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.837 22:04:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.404 /dev/nbd1 00:04:46.404 22:04:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.404 22:04:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.404 22:04:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:46.404 22:04:42 -- common/autotest_common.sh@867 -- # local i 00:04:46.404 22:04:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:46.404 22:04:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:46.404 22:04:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:46.404 22:04:42 -- common/autotest_common.sh@871 -- # break 00:04:46.404 22:04:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:46.404 22:04:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:46.404 22:04:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.404 1+0 records in 00:04:46.404 1+0 records out 00:04:46.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278091 s, 14.7 MB/s 00:04:46.404 22:04:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.404 22:04:42 -- common/autotest_common.sh@884 -- # size=4096 00:04:46.404 22:04:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.404 22:04:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:46.404 22:04:42 -- common/autotest_common.sh@887 -- # return 0 00:04:46.404 22:04:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.404 22:04:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.404 22:04:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.404 22:04:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.404 22:04:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.404 
22:04:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.404 { 00:04:46.404 "bdev_name": "Malloc0", 00:04:46.404 "nbd_device": "/dev/nbd0" 00:04:46.404 }, 00:04:46.404 { 00:04:46.404 "bdev_name": "Malloc1", 00:04:46.404 "nbd_device": "/dev/nbd1" 00:04:46.404 } 00:04:46.404 ]' 00:04:46.404 22:04:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.404 { 00:04:46.404 "bdev_name": "Malloc0", 00:04:46.404 "nbd_device": "/dev/nbd0" 00:04:46.404 }, 00:04:46.404 { 00:04:46.404 "bdev_name": "Malloc1", 00:04:46.404 "nbd_device": "/dev/nbd1" 00:04:46.404 } 00:04:46.404 ]' 00:04:46.404 22:04:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.404 22:04:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.404 /dev/nbd1' 00:04:46.404 22:04:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.404 /dev/nbd1' 00:04:46.404 22:04:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.404 22:04:43 -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.404 22:04:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.404 22:04:43 -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.404 22:04:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.404 22:04:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.404 22:04:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.404 22:04:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.404 22:04:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.404 22:04:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:46.404 22:04:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.404 22:04:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.664 256+0 records in 00:04:46.664 256+0 records out 00:04:46.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00900984 s, 116 MB/s 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.664 256+0 records in 00:04:46.664 256+0 records out 00:04:46.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251018 s, 41.8 MB/s 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.664 256+0 records in 00:04:46.664 256+0 records out 00:04:46.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242716 s, 43.2 MB/s 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@51 -- # local i 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.664 22:04:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.923 22:04:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.923 22:04:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.923 22:04:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.923 22:04:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.923 22:04:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.923 22:04:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.923 22:04:43 -- bdev/nbd_common.sh@41 -- # break 00:04:46.923 22:04:43 -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.923 22:04:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.923 22:04:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.182 22:04:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.182 22:04:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.182 22:04:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.182 22:04:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.182 22:04:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.182 22:04:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.182 22:04:43 -- bdev/nbd_common.sh@41 -- # break 00:04:47.182 22:04:43 -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.182 22:04:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.182 22:04:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.182 22:04:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.441 22:04:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.441 22:04:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.441 22:04:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.441 22:04:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:47.441 22:04:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:47.441 22:04:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.441 22:04:43 -- bdev/nbd_common.sh@65 -- # true 00:04:47.441 22:04:43 -- bdev/nbd_common.sh@65 -- # count=0 00:04:47.441 22:04:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:47.441 22:04:43 -- bdev/nbd_common.sh@104 -- # count=0 00:04:47.441 22:04:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:47.441 22:04:43 -- bdev/nbd_common.sh@109 -- # return 0 00:04:47.441 22:04:43 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:47.755 22:04:44 -- event/event.sh@35 -- # sleep 3 00:04:48.016 [2024-11-17 
22:04:44.567242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.275 [2024-11-17 22:04:44.642315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.275 [2024-11-17 22:04:44.642325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.275 [2024-11-17 22:04:44.713672] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.275 [2024-11-17 22:04:44.713750] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.808 22:04:47 -- event/event.sh@23 -- # for i in {0..2} 00:04:50.808 spdk_app_start Round 2 00:04:50.808 22:04:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:50.808 22:04:47 -- event/event.sh@25 -- # waitforlisten 56846 /var/tmp/spdk-nbd.sock 00:04:50.808 22:04:47 -- common/autotest_common.sh@829 -- # '[' -z 56846 ']' 00:04:50.808 22:04:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.808 22:04:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:50.808 22:04:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.808 22:04:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.808 22:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:51.067 22:04:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.067 22:04:47 -- common/autotest_common.sh@862 -- # return 0 00:04:51.067 22:04:47 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.325 Malloc0 00:04:51.325 22:04:47 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.584 Malloc1 00:04:51.584 22:04:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@12 -- # local i 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.584 22:04:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.843 /dev/nbd0 00:04:51.843 22:04:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.843 22:04:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
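For reference, the nbd_dd_data_verify steps traced above reduce to a plain write-then-compare loop; a minimal sketch, with the temp-file path, block size, count and device names simply mirroring the values used in this run:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    # write phase: seed a 1 MiB pattern file and copy it onto every exported NBD device
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: byte-compare the first 1 MiB of each device against the pattern file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"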
00:04:51.843 22:04:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:51.843 22:04:48 -- common/autotest_common.sh@867 -- # local i 00:04:51.843 22:04:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:51.843 22:04:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:51.843 22:04:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:51.843 22:04:48 -- common/autotest_common.sh@871 -- # break 00:04:51.843 22:04:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:51.843 22:04:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:51.843 22:04:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.843 1+0 records in 00:04:51.843 1+0 records out 00:04:51.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019906 s, 20.6 MB/s 00:04:51.843 22:04:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.843 22:04:48 -- common/autotest_common.sh@884 -- # size=4096 00:04:51.843 22:04:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:51.843 22:04:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:51.843 22:04:48 -- common/autotest_common.sh@887 -- # return 0 00:04:51.843 22:04:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.843 22:04:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.843 22:04:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.102 /dev/nbd1 00:04:52.102 22:04:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.102 22:04:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.102 22:04:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:52.102 22:04:48 -- common/autotest_common.sh@867 -- # local i 00:04:52.102 22:04:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:52.102 22:04:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:52.102 22:04:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:52.102 22:04:48 -- common/autotest_common.sh@871 -- # break 00:04:52.102 22:04:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:52.102 22:04:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:52.102 22:04:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.102 1+0 records in 00:04:52.102 1+0 records out 00:04:52.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323154 s, 12.7 MB/s 00:04:52.102 22:04:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.102 22:04:48 -- common/autotest_common.sh@884 -- # size=4096 00:04:52.102 22:04:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.102 22:04:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:52.102 22:04:48 -- common/autotest_common.sh@887 -- # return 0 00:04:52.102 22:04:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.102 22:04:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.102 22:04:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.102 22:04:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.102 22:04:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:04:52.361 22:04:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.361 { 00:04:52.361 "bdev_name": "Malloc0", 00:04:52.361 "nbd_device": "/dev/nbd0" 00:04:52.361 }, 00:04:52.361 { 00:04:52.361 "bdev_name": "Malloc1", 00:04:52.361 "nbd_device": "/dev/nbd1" 00:04:52.361 } 00:04:52.361 ]' 00:04:52.361 22:04:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.361 { 00:04:52.361 "bdev_name": "Malloc0", 00:04:52.361 "nbd_device": "/dev/nbd0" 00:04:52.361 }, 00:04:52.361 { 00:04:52.361 "bdev_name": "Malloc1", 00:04:52.361 "nbd_device": "/dev/nbd1" 00:04:52.361 } 00:04:52.361 ]' 00:04:52.361 22:04:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.620 /dev/nbd1' 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.620 /dev/nbd1' 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.620 256+0 records in 00:04:52.620 256+0 records out 00:04:52.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00692361 s, 151 MB/s 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.620 256+0 records in 00:04:52.620 256+0 records out 00:04:52.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248205 s, 42.2 MB/s 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.620 256+0 records in 00:04:52.620 256+0 records out 00:04:52.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026134 s, 40.1 MB/s 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.620 
22:04:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@51 -- # local i 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.620 22:04:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.879 22:04:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.879 22:04:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.879 22:04:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.879 22:04:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.879 22:04:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.879 22:04:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.879 22:04:49 -- bdev/nbd_common.sh@41 -- # break 00:04:52.879 22:04:49 -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.879 22:04:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.879 22:04:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.137 22:04:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.137 22:04:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.137 22:04:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.137 22:04:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.137 22:04:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.137 22:04:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.137 22:04:49 -- bdev/nbd_common.sh@41 -- # break 00:04:53.137 22:04:49 -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.137 22:04:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.137 22:04:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.137 22:04:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.396 22:04:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.396 22:04:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.396 22:04:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.396 22:04:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.396 22:04:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.396 22:04:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.396 22:04:49 -- bdev/nbd_common.sh@65 -- # true 00:04:53.396 22:04:49 -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.396 22:04:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.396 22:04:49 -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.396 22:04:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.396 22:04:49 -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.396 22:04:49 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.655 22:04:50 -- event/event.sh@35 -- # 
sleep 3 00:04:53.914 [2024-11-17 22:04:50.438815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.914 [2024-11-17 22:04:50.508381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.914 [2024-11-17 22:04:50.508392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.174 [2024-11-17 22:04:50.583293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.174 [2024-11-17 22:04:50.583359] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.707 22:04:53 -- event/event.sh@38 -- # waitforlisten 56846 /var/tmp/spdk-nbd.sock 00:04:56.707 22:04:53 -- common/autotest_common.sh@829 -- # '[' -z 56846 ']' 00:04:56.707 22:04:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.707 22:04:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.707 22:04:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.707 22:04:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.707 22:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:56.966 22:04:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.966 22:04:53 -- common/autotest_common.sh@862 -- # return 0 00:04:56.966 22:04:53 -- event/event.sh@39 -- # killprocess 56846 00:04:56.966 22:04:53 -- common/autotest_common.sh@936 -- # '[' -z 56846 ']' 00:04:56.966 22:04:53 -- common/autotest_common.sh@940 -- # kill -0 56846 00:04:56.966 22:04:53 -- common/autotest_common.sh@941 -- # uname 00:04:56.966 22:04:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:56.966 22:04:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56846 00:04:56.966 22:04:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:56.966 22:04:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:56.966 killing process with pid 56846 00:04:56.966 22:04:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56846' 00:04:56.966 22:04:53 -- common/autotest_common.sh@955 -- # kill 56846 00:04:56.966 22:04:53 -- common/autotest_common.sh@960 -- # wait 56846 00:04:57.224 spdk_app_start is called in Round 0. 00:04:57.224 Shutdown signal received, stop current app iteration 00:04:57.224 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:57.224 spdk_app_start is called in Round 1. 00:04:57.224 Shutdown signal received, stop current app iteration 00:04:57.224 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:57.224 spdk_app_start is called in Round 2. 00:04:57.224 Shutdown signal received, stop current app iteration 00:04:57.224 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:57.224 spdk_app_start is called in Round 3. 
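The waitfornbd helper exercised in the traces above amounts to a bounded poll of /proc/partitions followed by a one-block direct read to confirm the device actually serves I/O. A reduced sketch; the temp-file path and the retry sleep are this sketch's assumptions (the successful trace never needs a retry, so no back-off is visible):

    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest i rc    # tmp path chosen for the sketch
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; not visible in the trace above
        done
        # prove the device answers reads, not just that a node exists
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s "$tmp")" != 0 ]; rc=$?
        rm -f "$tmp"
        return "$rc"
    }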
00:04:57.224 Shutdown signal received, stop current app iteration 00:04:57.224 22:04:53 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:57.225 22:04:53 -- event/event.sh@42 -- # return 0 00:04:57.225 00:04:57.225 real 0m19.009s 00:04:57.225 user 0m42.228s 00:04:57.225 sys 0m3.054s 00:04:57.225 22:04:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.225 22:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:57.225 ************************************ 00:04:57.225 END TEST app_repeat 00:04:57.225 ************************************ 00:04:57.225 22:04:53 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:57.225 22:04:53 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:57.225 22:04:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.225 22:04:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.225 22:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:57.225 ************************************ 00:04:57.225 START TEST cpu_locks 00:04:57.225 ************************************ 00:04:57.225 22:04:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:57.484 * Looking for test storage... 00:04:57.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:57.484 22:04:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:57.484 22:04:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:57.484 22:04:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:57.484 22:04:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:57.484 22:04:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:57.484 22:04:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:57.484 22:04:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:57.484 22:04:53 -- scripts/common.sh@335 -- # IFS=.-: 00:04:57.484 22:04:53 -- scripts/common.sh@335 -- # read -ra ver1 00:04:57.484 22:04:53 -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.484 22:04:53 -- scripts/common.sh@336 -- # read -ra ver2 00:04:57.484 22:04:53 -- scripts/common.sh@337 -- # local 'op=<' 00:04:57.484 22:04:53 -- scripts/common.sh@339 -- # ver1_l=2 00:04:57.484 22:04:53 -- scripts/common.sh@340 -- # ver2_l=1 00:04:57.484 22:04:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:57.484 22:04:53 -- scripts/common.sh@343 -- # case "$op" in 00:04:57.484 22:04:53 -- scripts/common.sh@344 -- # : 1 00:04:57.484 22:04:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:57.484 22:04:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.484 22:04:53 -- scripts/common.sh@364 -- # decimal 1 00:04:57.484 22:04:53 -- scripts/common.sh@352 -- # local d=1 00:04:57.484 22:04:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.484 22:04:53 -- scripts/common.sh@354 -- # echo 1 00:04:57.484 22:04:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:57.484 22:04:53 -- scripts/common.sh@365 -- # decimal 2 00:04:57.484 22:04:53 -- scripts/common.sh@352 -- # local d=2 00:04:57.484 22:04:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.484 22:04:53 -- scripts/common.sh@354 -- # echo 2 00:04:57.484 22:04:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:57.484 22:04:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:57.484 22:04:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:57.484 22:04:53 -- scripts/common.sh@367 -- # return 0 00:04:57.484 22:04:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.484 22:04:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.484 --rc genhtml_branch_coverage=1 00:04:57.484 --rc genhtml_function_coverage=1 00:04:57.484 --rc genhtml_legend=1 00:04:57.484 --rc geninfo_all_blocks=1 00:04:57.484 --rc geninfo_unexecuted_blocks=1 00:04:57.484 00:04:57.484 ' 00:04:57.484 22:04:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.484 --rc genhtml_branch_coverage=1 00:04:57.484 --rc genhtml_function_coverage=1 00:04:57.484 --rc genhtml_legend=1 00:04:57.484 --rc geninfo_all_blocks=1 00:04:57.484 --rc geninfo_unexecuted_blocks=1 00:04:57.484 00:04:57.484 ' 00:04:57.484 22:04:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.484 --rc genhtml_branch_coverage=1 00:04:57.484 --rc genhtml_function_coverage=1 00:04:57.484 --rc genhtml_legend=1 00:04:57.484 --rc geninfo_all_blocks=1 00:04:57.484 --rc geninfo_unexecuted_blocks=1 00:04:57.484 00:04:57.484 ' 00:04:57.484 22:04:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.484 --rc genhtml_branch_coverage=1 00:04:57.484 --rc genhtml_function_coverage=1 00:04:57.484 --rc genhtml_legend=1 00:04:57.484 --rc geninfo_all_blocks=1 00:04:57.484 --rc geninfo_unexecuted_blocks=1 00:04:57.484 00:04:57.484 ' 00:04:57.484 22:04:53 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:57.484 22:04:53 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:57.484 22:04:53 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:57.484 22:04:53 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:57.484 22:04:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.484 22:04:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.484 22:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:57.484 ************************************ 00:04:57.484 START TEST default_locks 00:04:57.484 ************************************ 00:04:57.484 22:04:53 -- common/autotest_common.sh@1114 -- # default_locks 00:04:57.484 22:04:53 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57484 00:04:57.484 22:04:53 -- event/cpu_locks.sh@47 -- # waitforlisten 57484 00:04:57.484 22:04:53 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:04:57.484 22:04:53 -- common/autotest_common.sh@829 -- # '[' -z 57484 ']' 00:04:57.484 22:04:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.485 22:04:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.485 22:04:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.485 22:04:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.485 22:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:57.485 [2024-11-17 22:04:54.053293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:57.485 [2024-11-17 22:04:54.053392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57484 ] 00:04:57.743 [2024-11-17 22:04:54.188177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.743 [2024-11-17 22:04:54.270136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.743 [2024-11-17 22:04:54.270305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.679 22:04:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.679 22:04:54 -- common/autotest_common.sh@862 -- # return 0 00:04:58.679 22:04:54 -- event/cpu_locks.sh@49 -- # locks_exist 57484 00:04:58.679 22:04:54 -- event/cpu_locks.sh@22 -- # lslocks -p 57484 00:04:58.679 22:04:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.937 22:04:55 -- event/cpu_locks.sh@50 -- # killprocess 57484 00:04:58.938 22:04:55 -- common/autotest_common.sh@936 -- # '[' -z 57484 ']' 00:04:58.938 22:04:55 -- common/autotest_common.sh@940 -- # kill -0 57484 00:04:58.938 22:04:55 -- common/autotest_common.sh@941 -- # uname 00:04:58.938 22:04:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.938 22:04:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57484 00:04:58.938 22:04:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:58.938 killing process with pid 57484 00:04:58.938 22:04:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:58.938 22:04:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57484' 00:04:58.938 22:04:55 -- common/autotest_common.sh@955 -- # kill 57484 00:04:58.938 22:04:55 -- common/autotest_common.sh@960 -- # wait 57484 00:04:59.505 22:04:55 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57484 00:04:59.505 22:04:55 -- common/autotest_common.sh@650 -- # local es=0 00:04:59.505 22:04:55 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57484 00:04:59.505 22:04:55 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:59.505 22:04:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.505 22:04:55 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:59.505 22:04:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.505 22:04:55 -- common/autotest_common.sh@653 -- # waitforlisten 57484 00:04:59.505 22:04:55 -- common/autotest_common.sh@829 -- # '[' -z 57484 ']' 00:04:59.505 22:04:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.505 22:04:55 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.505 22:04:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.505 22:04:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.505 22:04:55 -- common/autotest_common.sh@10 -- # set +x 00:04:59.505 ERROR: process (pid: 57484) is no longer running 00:04:59.505 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57484) - No such process 00:04:59.505 22:04:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.505 22:04:55 -- common/autotest_common.sh@862 -- # return 1 00:04:59.505 22:04:55 -- common/autotest_common.sh@653 -- # es=1 00:04:59.505 22:04:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:59.505 22:04:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:59.505 22:04:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:59.505 22:04:55 -- event/cpu_locks.sh@54 -- # no_locks 00:04:59.505 22:04:55 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:59.505 22:04:55 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:59.505 22:04:55 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:59.505 00:04:59.505 real 0m1.960s 00:04:59.505 user 0m1.985s 00:04:59.505 sys 0m0.625s 00:04:59.505 22:04:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:59.505 22:04:55 -- common/autotest_common.sh@10 -- # set +x 00:04:59.505 ************************************ 00:04:59.505 END TEST default_locks 00:04:59.505 ************************************ 00:04:59.505 22:04:55 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:59.505 22:04:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.505 22:04:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.505 22:04:55 -- common/autotest_common.sh@10 -- # set +x 00:04:59.505 ************************************ 00:04:59.505 START TEST default_locks_via_rpc 00:04:59.505 ************************************ 00:04:59.505 22:04:56 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:04:59.505 22:04:56 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57543 00:04:59.505 22:04:56 -- event/cpu_locks.sh@63 -- # waitforlisten 57543 00:04:59.505 22:04:56 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.505 22:04:56 -- common/autotest_common.sh@829 -- # '[' -z 57543 ']' 00:04:59.505 22:04:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.505 22:04:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.505 22:04:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.505 22:04:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.505 22:04:56 -- common/autotest_common.sh@10 -- # set +x 00:04:59.505 [2024-11-17 22:04:56.072661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
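The locks_exist check driven by default_locks above is just lslocks filtered for the spdk_cpu_lock marker. A stand-alone version under the same paths as this run (the socket poll is a crude stand-in for the waitforlisten helper):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &
    pid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # stand-in for waitforlisten
    # a running target pins each claimed core with a lock file under /var/tmp/spdk_cpu_lock_*
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"
    kill "$pid"
    wait "$pid" || true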
00:04:59.505 [2024-11-17 22:04:56.072787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57543 ] 00:04:59.764 [2024-11-17 22:04:56.210427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.764 [2024-11-17 22:04:56.296868] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.764 [2024-11-17 22:04:56.297034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.699 22:04:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.699 22:04:57 -- common/autotest_common.sh@862 -- # return 0 00:05:00.699 22:04:57 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:00.699 22:04:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.699 22:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:00.699 22:04:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.699 22:04:57 -- event/cpu_locks.sh@67 -- # no_locks 00:05:00.699 22:04:57 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:00.699 22:04:57 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:00.699 22:04:57 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:00.699 22:04:57 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:00.699 22:04:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.699 22:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:00.699 22:04:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.699 22:04:57 -- event/cpu_locks.sh@71 -- # locks_exist 57543 00:05:00.699 22:04:57 -- event/cpu_locks.sh@22 -- # lslocks -p 57543 00:05:00.699 22:04:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.957 22:04:57 -- event/cpu_locks.sh@73 -- # killprocess 57543 00:05:00.957 22:04:57 -- common/autotest_common.sh@936 -- # '[' -z 57543 ']' 00:05:00.957 22:04:57 -- common/autotest_common.sh@940 -- # kill -0 57543 00:05:00.957 22:04:57 -- common/autotest_common.sh@941 -- # uname 00:05:00.957 22:04:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:00.957 22:04:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57543 00:05:00.957 22:04:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:00.957 22:04:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:00.957 killing process with pid 57543 00:05:00.957 22:04:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57543' 00:05:00.957 22:04:57 -- common/autotest_common.sh@955 -- # kill 57543 00:05:00.957 22:04:57 -- common/autotest_common.sh@960 -- # wait 57543 00:05:01.525 00:05:01.525 real 0m1.928s 00:05:01.525 user 0m1.969s 00:05:01.525 sys 0m0.592s 00:05:01.525 22:04:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.525 22:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:01.525 ************************************ 00:05:01.525 END TEST default_locks_via_rpc 00:05:01.525 ************************************ 00:05:01.525 22:04:57 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:01.525 22:04:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.525 22:04:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.525 22:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:01.525 
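default_locks_via_rpc drives the same core lock through RPC instead of process flags. The calls involved are the framework_disable_cpumask_locks / framework_enable_cpumask_locks methods seen in the trace; the glob check below mirrors the no_locks helper and assumes the lock files live at /var/tmp/spdk_cpu_lock_* as shown later in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # drop the per-core lock files while the target keeps running
    "$rpc" framework_disable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo "unexpected: lock files still present"
    # take them back; lslocks -p <pid> | grep spdk_cpu_lock should match again afterwards
    "$rpc" framework_enable_cpumask_locks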
************************************ 00:05:01.525 START TEST non_locking_app_on_locked_coremask 00:05:01.525 ************************************ 00:05:01.525 22:04:57 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:01.525 22:04:57 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57612 00:05:01.525 22:04:57 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.525 22:04:57 -- event/cpu_locks.sh@81 -- # waitforlisten 57612 /var/tmp/spdk.sock 00:05:01.525 22:04:57 -- common/autotest_common.sh@829 -- # '[' -z 57612 ']' 00:05:01.525 22:04:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.525 22:04:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.525 22:04:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.525 22:04:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.525 22:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:01.525 [2024-11-17 22:04:58.034460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:01.525 [2024-11-17 22:04:58.034538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57612 ] 00:05:01.784 [2024-11-17 22:04:58.161066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.784 [2024-11-17 22:04:58.243001] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:01.784 [2024-11-17 22:04:58.243187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.719 22:04:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.719 22:04:59 -- common/autotest_common.sh@862 -- # return 0 00:05:02.719 22:04:59 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57640 00:05:02.719 22:04:59 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:02.719 22:04:59 -- event/cpu_locks.sh@85 -- # waitforlisten 57640 /var/tmp/spdk2.sock 00:05:02.719 22:04:59 -- common/autotest_common.sh@829 -- # '[' -z 57640 ']' 00:05:02.719 22:04:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.719 22:04:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.719 22:04:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.719 22:04:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.719 22:04:59 -- common/autotest_common.sh@10 -- # set +x 00:05:02.719 [2024-11-17 22:04:59.103084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:02.719 [2024-11-17 22:04:59.103188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57640 ] 00:05:02.719 [2024-11-17 22:04:59.241094] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
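Condensed, the scenario this test sets up is two targets sharing core 0, where the second opts out of the lock and uses its own RPC socket; a sketch with the flags and paths taken from this run (the real test also waits on each RPC socket before proceeding):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                       # claims /var/tmp/spdk_cpu_lock_000 for core 0
    pid1=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                    # same core, but it never tries to claim the lock
    # only the first instance should show up holding the core lock
    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "pid $pid1 owns core 0"
    kill "$pid1" "$pid2"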
00:05:02.719 [2024-11-17 22:04:59.241141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.978 [2024-11-17 22:04:59.410703] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:02.978 [2024-11-17 22:04:59.414907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.599 22:05:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.599 22:05:00 -- common/autotest_common.sh@862 -- # return 0 00:05:03.599 22:05:00 -- event/cpu_locks.sh@87 -- # locks_exist 57612 00:05:03.599 22:05:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.599 22:05:00 -- event/cpu_locks.sh@22 -- # lslocks -p 57612 00:05:04.535 22:05:00 -- event/cpu_locks.sh@89 -- # killprocess 57612 00:05:04.535 22:05:00 -- common/autotest_common.sh@936 -- # '[' -z 57612 ']' 00:05:04.535 22:05:00 -- common/autotest_common.sh@940 -- # kill -0 57612 00:05:04.535 22:05:00 -- common/autotest_common.sh@941 -- # uname 00:05:04.535 22:05:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.535 22:05:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57612 00:05:04.535 22:05:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.535 22:05:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.535 killing process with pid 57612 00:05:04.535 22:05:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57612' 00:05:04.535 22:05:00 -- common/autotest_common.sh@955 -- # kill 57612 00:05:04.535 22:05:00 -- common/autotest_common.sh@960 -- # wait 57612 00:05:05.471 22:05:01 -- event/cpu_locks.sh@90 -- # killprocess 57640 00:05:05.471 22:05:01 -- common/autotest_common.sh@936 -- # '[' -z 57640 ']' 00:05:05.471 22:05:01 -- common/autotest_common.sh@940 -- # kill -0 57640 00:05:05.471 22:05:01 -- common/autotest_common.sh@941 -- # uname 00:05:05.471 22:05:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.471 22:05:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57640 00:05:05.471 22:05:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:05.471 killing process with pid 57640 00:05:05.471 22:05:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:05.471 22:05:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57640' 00:05:05.471 22:05:02 -- common/autotest_common.sh@955 -- # kill 57640 00:05:05.471 22:05:02 -- common/autotest_common.sh@960 -- # wait 57640 00:05:06.040 00:05:06.040 real 0m4.615s 00:05:06.040 user 0m5.030s 00:05:06.040 sys 0m1.203s 00:05:06.040 22:05:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.040 22:05:02 -- common/autotest_common.sh@10 -- # set +x 00:05:06.040 ************************************ 00:05:06.040 END TEST non_locking_app_on_locked_coremask 00:05:06.040 ************************************ 00:05:06.040 22:05:02 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:06.040 22:05:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.040 22:05:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.040 22:05:02 -- common/autotest_common.sh@10 -- # set +x 00:05:06.299 ************************************ 00:05:06.299 START TEST locking_app_on_unlocked_coremask 00:05:06.299 ************************************ 00:05:06.299 22:05:02 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:06.299 22:05:02 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57730 00:05:06.299 22:05:02 -- event/cpu_locks.sh@99 -- # waitforlisten 57730 /var/tmp/spdk.sock 00:05:06.299 22:05:02 -- common/autotest_common.sh@829 -- # '[' -z 57730 ']' 00:05:06.299 22:05:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.299 22:05:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.299 22:05:02 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:06.299 22:05:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.299 22:05:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.299 22:05:02 -- common/autotest_common.sh@10 -- # set +x 00:05:06.299 [2024-11-17 22:05:02.725698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:06.299 [2024-11-17 22:05:02.725824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57730 ] 00:05:06.299 [2024-11-17 22:05:02.863632] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:06.299 [2024-11-17 22:05:02.863661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.557 [2024-11-17 22:05:02.963197] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:06.557 [2024-11-17 22:05:02.963347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.122 22:05:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.122 22:05:03 -- common/autotest_common.sh@862 -- # return 0 00:05:07.122 22:05:03 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57758 00:05:07.122 22:05:03 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.122 22:05:03 -- event/cpu_locks.sh@103 -- # waitforlisten 57758 /var/tmp/spdk2.sock 00:05:07.122 22:05:03 -- common/autotest_common.sh@829 -- # '[' -z 57758 ']' 00:05:07.122 22:05:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.122 22:05:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.122 22:05:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.122 22:05:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.122 22:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:07.380 [2024-11-17 22:05:03.747918] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:07.380 [2024-11-17 22:05:03.748010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57758 ] 00:05:07.380 [2024-11-17 22:05:03.889911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.638 [2024-11-17 22:05:04.063369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:07.638 [2024-11-17 22:05:04.063529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.203 22:05:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.203 22:05:04 -- common/autotest_common.sh@862 -- # return 0 00:05:08.203 22:05:04 -- event/cpu_locks.sh@105 -- # locks_exist 57758 00:05:08.203 22:05:04 -- event/cpu_locks.sh@22 -- # lslocks -p 57758 00:05:08.203 22:05:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.140 22:05:05 -- event/cpu_locks.sh@107 -- # killprocess 57730 00:05:09.140 22:05:05 -- common/autotest_common.sh@936 -- # '[' -z 57730 ']' 00:05:09.140 22:05:05 -- common/autotest_common.sh@940 -- # kill -0 57730 00:05:09.140 22:05:05 -- common/autotest_common.sh@941 -- # uname 00:05:09.140 22:05:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:09.140 22:05:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57730 00:05:09.140 22:05:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:09.140 killing process with pid 57730 00:05:09.140 22:05:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:09.140 22:05:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57730' 00:05:09.140 22:05:05 -- common/autotest_common.sh@955 -- # kill 57730 00:05:09.140 22:05:05 -- common/autotest_common.sh@960 -- # wait 57730 00:05:10.077 22:05:06 -- event/cpu_locks.sh@108 -- # killprocess 57758 00:05:10.077 22:05:06 -- common/autotest_common.sh@936 -- # '[' -z 57758 ']' 00:05:10.077 22:05:06 -- common/autotest_common.sh@940 -- # kill -0 57758 00:05:10.077 22:05:06 -- common/autotest_common.sh@941 -- # uname 00:05:10.077 22:05:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:10.077 22:05:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57758 00:05:10.336 22:05:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:10.336 22:05:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:10.336 killing process with pid 57758 00:05:10.336 22:05:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57758' 00:05:10.336 22:05:06 -- common/autotest_common.sh@955 -- # kill 57758 00:05:10.336 22:05:06 -- common/autotest_common.sh@960 -- # wait 57758 00:05:10.905 00:05:10.905 real 0m4.584s 00:05:10.905 user 0m4.873s 00:05:10.905 sys 0m1.253s 00:05:10.905 22:05:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.905 ************************************ 00:05:10.905 22:05:07 -- common/autotest_common.sh@10 -- # set +x 00:05:10.905 END TEST locking_app_on_unlocked_coremask 00:05:10.905 ************************************ 00:05:10.905 22:05:07 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:10.905 22:05:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.905 22:05:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.905 22:05:07 -- common/autotest_common.sh@10 -- # set +x 
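The killprocess helper that keeps reappearing in these traces follows one pattern: confirm the PID is alive, look up its process name, refuse to signal a sudo wrapper, then kill and reap it. A reduced sketch (the sudo branch is never hit in this trace; bailing out there is this sketch's choice):

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" || return 1                      # still running?
        if [ "$(uname)" = Linux ]; then
            name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for an SPDK target
            [ "$name" = sudo ] && return 1              # assumed bail-out; not exercised above
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap so the PID cannot be reused mid-test
    }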
00:05:10.905 ************************************ 00:05:10.905 START TEST locking_app_on_locked_coremask 00:05:10.905 ************************************ 00:05:10.905 22:05:07 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:10.905 22:05:07 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57837 00:05:10.905 22:05:07 -- event/cpu_locks.sh@116 -- # waitforlisten 57837 /var/tmp/spdk.sock 00:05:10.905 22:05:07 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.905 22:05:07 -- common/autotest_common.sh@829 -- # '[' -z 57837 ']' 00:05:10.905 22:05:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.905 22:05:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.905 22:05:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.905 22:05:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.905 22:05:07 -- common/autotest_common.sh@10 -- # set +x 00:05:10.905 [2024-11-17 22:05:07.372612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:10.905 [2024-11-17 22:05:07.372717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57837 ] 00:05:10.905 [2024-11-17 22:05:07.508278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.164 [2024-11-17 22:05:07.602315] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:11.164 [2024-11-17 22:05:07.602462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.732 22:05:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.732 22:05:08 -- common/autotest_common.sh@862 -- # return 0 00:05:11.732 22:05:08 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:11.732 22:05:08 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57865 00:05:11.732 22:05:08 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57865 /var/tmp/spdk2.sock 00:05:11.732 22:05:08 -- common/autotest_common.sh@650 -- # local es=0 00:05:11.732 22:05:08 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57865 /var/tmp/spdk2.sock 00:05:11.732 22:05:08 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:11.732 22:05:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.732 22:05:08 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:11.732 22:05:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.732 22:05:08 -- common/autotest_common.sh@653 -- # waitforlisten 57865 /var/tmp/spdk2.sock 00:05:11.732 22:05:08 -- common/autotest_common.sh@829 -- # '[' -z 57865 ']' 00:05:11.732 22:05:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.991 22:05:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.991 22:05:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
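The NOT wrapper used here to assert that the second waitforlisten must fail is, at its core, a status inverter; a reduced sketch (the real helper also special-cases statuses above 128 from signals, which is omitted):

    NOT() {
        # the caller passes only if the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    # used above as: NOT waitforlisten 57865 /var/tmp/spdk2.sock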
00:05:11.991 22:05:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.991 22:05:08 -- common/autotest_common.sh@10 -- # set +x 00:05:11.991 [2024-11-17 22:05:08.390491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:11.991 [2024-11-17 22:05:08.390583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57865 ] 00:05:11.991 [2024-11-17 22:05:08.522849] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57837 has claimed it. 00:05:11.991 [2024-11-17 22:05:08.522906] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:12.558 ERROR: process (pid: 57865) is no longer running 00:05:12.558 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57865) - No such process 00:05:12.558 22:05:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.558 22:05:09 -- common/autotest_common.sh@862 -- # return 1 00:05:12.558 22:05:09 -- common/autotest_common.sh@653 -- # es=1 00:05:12.558 22:05:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:12.558 22:05:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:12.558 22:05:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:12.558 22:05:09 -- event/cpu_locks.sh@122 -- # locks_exist 57837 00:05:12.558 22:05:09 -- event/cpu_locks.sh@22 -- # lslocks -p 57837 00:05:12.558 22:05:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.126 22:05:09 -- event/cpu_locks.sh@124 -- # killprocess 57837 00:05:13.126 22:05:09 -- common/autotest_common.sh@936 -- # '[' -z 57837 ']' 00:05:13.126 22:05:09 -- common/autotest_common.sh@940 -- # kill -0 57837 00:05:13.126 22:05:09 -- common/autotest_common.sh@941 -- # uname 00:05:13.126 22:05:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:13.126 22:05:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57837 00:05:13.126 22:05:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:13.126 killing process with pid 57837 00:05:13.126 22:05:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:13.126 22:05:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57837' 00:05:13.126 22:05:09 -- common/autotest_common.sh@955 -- # kill 57837 00:05:13.126 22:05:09 -- common/autotest_common.sh@960 -- # wait 57837 00:05:13.693 00:05:13.693 real 0m2.805s 00:05:13.693 user 0m3.123s 00:05:13.693 sys 0m0.724s 00:05:13.693 22:05:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.693 22:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.693 ************************************ 00:05:13.693 END TEST locking_app_on_locked_coremask 00:05:13.693 ************************************ 00:05:13.693 22:05:10 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:13.693 22:05:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.693 22:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.693 22:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.693 ************************************ 00:05:13.693 START TEST locking_overlapped_coremask 00:05:13.693 ************************************ 00:05:13.693 22:05:10 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:13.693 22:05:10 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57922 00:05:13.693 22:05:10 -- event/cpu_locks.sh@133 -- # waitforlisten 57922 /var/tmp/spdk.sock 00:05:13.693 22:05:10 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:13.693 22:05:10 -- common/autotest_common.sh@829 -- # '[' -z 57922 ']' 00:05:13.693 22:05:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.693 22:05:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.693 22:05:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.693 22:05:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.693 22:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.693 [2024-11-17 22:05:10.236532] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:13.694 [2024-11-17 22:05:10.236626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57922 ] 00:05:13.952 [2024-11-17 22:05:10.377992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.952 [2024-11-17 22:05:10.486524] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:13.952 [2024-11-17 22:05:10.486873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.952 [2024-11-17 22:05:10.487167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.952 [2024-11-17 22:05:10.487174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.889 22:05:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.889 22:05:11 -- common/autotest_common.sh@862 -- # return 0 00:05:14.889 22:05:11 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57952 00:05:14.889 22:05:11 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:14.889 22:05:11 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57952 /var/tmp/spdk2.sock 00:05:14.889 22:05:11 -- common/autotest_common.sh@650 -- # local es=0 00:05:14.889 22:05:11 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57952 /var/tmp/spdk2.sock 00:05:14.889 22:05:11 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:14.889 22:05:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.889 22:05:11 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:14.889 22:05:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.889 22:05:11 -- common/autotest_common.sh@653 -- # waitforlisten 57952 /var/tmp/spdk2.sock 00:05:14.889 22:05:11 -- common/autotest_common.sh@829 -- # '[' -z 57952 ']' 00:05:14.889 22:05:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.889 22:05:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.889 22:05:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
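The overlapped-coremask case that follows boils down to two masks sharing core 2: 0x7 (cores 0-2) is claimed first, so a second target asking for 0x1c (cores 2-4) must abort, and afterwards only the first target's three lock files may remain. A rough sketch under those assumptions; the socket poll again stands in for waitforlisten, and exact timing is simplified:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x7 &                       # locks cores 0, 1 and 2
    pid1=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # expected to abort: "Cannot create lock on core 2, probably process <pid1> has claimed it"
    "$spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock && echo "unexpected: overlapping mask started"
    # only the survivor's lock files should remain
    [ "$(echo /var/tmp/spdk_cpu_lock_*)" = \
      "/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002" ]
    kill "$pid1"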
00:05:14.889 22:05:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.889 22:05:11 -- common/autotest_common.sh@10 -- # set +x 00:05:14.889 [2024-11-17 22:05:11.247000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:14.889 [2024-11-17 22:05:11.247161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57952 ] 00:05:14.889 [2024-11-17 22:05:11.391084] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57922 has claimed it. 00:05:14.889 [2024-11-17 22:05:11.391186] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:15.458 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57952) - No such process 00:05:15.458 ERROR: process (pid: 57952) is no longer running 00:05:15.458 22:05:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.458 22:05:11 -- common/autotest_common.sh@862 -- # return 1 00:05:15.458 22:05:11 -- common/autotest_common.sh@653 -- # es=1 00:05:15.458 22:05:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:15.458 22:05:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:15.458 22:05:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:15.458 22:05:11 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:15.458 22:05:11 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:15.458 22:05:11 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:15.458 22:05:11 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:15.458 22:05:11 -- event/cpu_locks.sh@141 -- # killprocess 57922 00:05:15.458 22:05:11 -- common/autotest_common.sh@936 -- # '[' -z 57922 ']' 00:05:15.458 22:05:11 -- common/autotest_common.sh@940 -- # kill -0 57922 00:05:15.458 22:05:11 -- common/autotest_common.sh@941 -- # uname 00:05:15.458 22:05:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.458 22:05:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57922 00:05:15.458 22:05:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:15.458 22:05:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:15.458 killing process with pid 57922 00:05:15.458 22:05:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57922' 00:05:15.458 22:05:12 -- common/autotest_common.sh@955 -- # kill 57922 00:05:15.458 22:05:12 -- common/autotest_common.sh@960 -- # wait 57922 00:05:16.026 00:05:16.026 real 0m2.400s 00:05:16.026 user 0m6.480s 00:05:16.026 sys 0m0.564s 00:05:16.026 22:05:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.026 22:05:12 -- common/autotest_common.sh@10 -- # set +x 00:05:16.026 ************************************ 00:05:16.026 END TEST locking_overlapped_coremask 00:05:16.026 ************************************ 00:05:16.026 22:05:12 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:16.026 22:05:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.026 22:05:12 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.026 22:05:12 -- common/autotest_common.sh@10 -- # set +x 00:05:16.026 ************************************ 00:05:16.026 START TEST locking_overlapped_coremask_via_rpc 00:05:16.026 ************************************ 00:05:16.026 22:05:12 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:16.026 22:05:12 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=57998 00:05:16.026 22:05:12 -- event/cpu_locks.sh@149 -- # waitforlisten 57998 /var/tmp/spdk.sock 00:05:16.026 22:05:12 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:16.026 22:05:12 -- common/autotest_common.sh@829 -- # '[' -z 57998 ']' 00:05:16.026 22:05:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.026 22:05:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.026 22:05:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.026 22:05:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.026 22:05:12 -- common/autotest_common.sh@10 -- # set +x 00:05:16.285 [2024-11-17 22:05:12.695576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:16.285 [2024-11-17 22:05:12.695713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57998 ] 00:05:16.285 [2024-11-17 22:05:12.831753] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:16.285 [2024-11-17 22:05:12.831824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.545 [2024-11-17 22:05:12.981539] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:16.545 [2024-11-17 22:05:12.981832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.545 [2024-11-17 22:05:12.981979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.545 [2024-11-17 22:05:12.981986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.113 22:05:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.113 22:05:13 -- common/autotest_common.sh@862 -- # return 0 00:05:17.113 22:05:13 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:17.113 22:05:13 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58028 00:05:17.113 22:05:13 -- event/cpu_locks.sh@153 -- # waitforlisten 58028 /var/tmp/spdk2.sock 00:05:17.113 22:05:13 -- common/autotest_common.sh@829 -- # '[' -z 58028 ']' 00:05:17.113 22:05:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.113 22:05:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.113 22:05:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
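Annotation: locking_overlapped_coremask_via_rpc, started above, differs from the previous test in one flag. Both targets are launched with --disable-cpumask-locks, so neither takes the /var/tmp/spdk_cpu_lock_* files at startup and the overlapping masks (0x7 and 0x1c share core 2) can both come up, which is why the traces around this point show "CPU core locks deactivated" instead of a claim error. Reduced to its essentials, reusing the command lines visible in this log:

  # with the lock files disabled, both targets start despite sharing core 2
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # pid 57998, /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # pid 58028
  # the locks are only taken later, when the test enables them over JSON-RPC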
00:05:17.113 22:05:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.113 22:05:13 -- common/autotest_common.sh@10 -- # set +x 00:05:17.113 [2024-11-17 22:05:13.711333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:17.113 [2024-11-17 22:05:13.711422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58028 ] 00:05:17.371 [2024-11-17 22:05:13.847215] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:17.371 [2024-11-17 22:05:13.847268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.630 [2024-11-17 22:05:14.013929] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.630 [2024-11-17 22:05:14.014224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.630 [2024-11-17 22:05:14.017883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:17.630 [2024-11-17 22:05:14.017894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.227 22:05:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.227 22:05:14 -- common/autotest_common.sh@862 -- # return 0 00:05:18.227 22:05:14 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:18.227 22:05:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.227 22:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:18.227 22:05:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.227 22:05:14 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.227 22:05:14 -- common/autotest_common.sh@650 -- # local es=0 00:05:18.227 22:05:14 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.227 22:05:14 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:18.227 22:05:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.227 22:05:14 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:18.227 22:05:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.227 22:05:14 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.227 22:05:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.227 22:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:18.227 [2024-11-17 22:05:14.812933] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57998 has claimed it. 
00:05:18.227 2024/11/17 22:05:14 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:18.227 request: 00:05:18.227 { 00:05:18.227 "method": "framework_enable_cpumask_locks", 00:05:18.227 "params": {} 00:05:18.227 } 00:05:18.227 Got JSON-RPC error response 00:05:18.227 GoRPCClient: error on JSON-RPC call 00:05:18.227 22:05:14 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:18.227 22:05:14 -- common/autotest_common.sh@653 -- # es=1 00:05:18.227 22:05:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:18.227 22:05:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:18.227 22:05:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:18.227 22:05:14 -- event/cpu_locks.sh@158 -- # waitforlisten 57998 /var/tmp/spdk.sock 00:05:18.227 22:05:14 -- common/autotest_common.sh@829 -- # '[' -z 57998 ']' 00:05:18.227 22:05:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.228 22:05:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.228 22:05:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.228 22:05:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.228 22:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:18.486 22:05:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.486 22:05:15 -- common/autotest_common.sh@862 -- # return 0 00:05:18.486 22:05:15 -- event/cpu_locks.sh@159 -- # waitforlisten 58028 /var/tmp/spdk2.sock 00:05:18.486 22:05:15 -- common/autotest_common.sh@829 -- # '[' -z 58028 ']' 00:05:18.486 22:05:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.487 22:05:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.487 22:05:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
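Annotation: the JSON-RPC exchange above is the core of the via_rpc test. With both targets running without core locks, framework_enable_cpumask_locks is called on the first target and succeeds, claiming /var/tmp/spdk_cpu_lock_000..002 for cores 0-2; the same call against the second target then fails with code -32603 ("Failed to claim CPU core: 2") because core 2 is in the second target's mask but already locked by the first. Expressed with SPDK's scripts/rpc.py directly (the harness goes through its rpc_cmd wrapper instead), the sequence would look roughly like:

  # first target, default socket /var/tmp/spdk.sock: succeeds and takes the core locks
  scripts/rpc.py framework_enable_cpumask_locks
  # second target, mask 0x1c: core 2 is already locked, so this returns the
  # -32603 "Failed to claim CPU core: 2" error captured in the log above
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks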
00:05:18.487 22:05:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.487 22:05:15 -- common/autotest_common.sh@10 -- # set +x 00:05:19.054 22:05:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.054 22:05:15 -- common/autotest_common.sh@862 -- # return 0 00:05:19.054 22:05:15 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:19.054 22:05:15 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:19.054 22:05:15 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:19.054 22:05:15 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:19.054 00:05:19.054 real 0m2.739s 00:05:19.054 user 0m1.418s 00:05:19.054 sys 0m0.241s 00:05:19.054 22:05:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.054 22:05:15 -- common/autotest_common.sh@10 -- # set +x 00:05:19.054 ************************************ 00:05:19.054 END TEST locking_overlapped_coremask_via_rpc 00:05:19.054 ************************************ 00:05:19.054 22:05:15 -- event/cpu_locks.sh@174 -- # cleanup 00:05:19.054 22:05:15 -- event/cpu_locks.sh@15 -- # [[ -z 57998 ]] 00:05:19.054 22:05:15 -- event/cpu_locks.sh@15 -- # killprocess 57998 00:05:19.054 22:05:15 -- common/autotest_common.sh@936 -- # '[' -z 57998 ']' 00:05:19.054 22:05:15 -- common/autotest_common.sh@940 -- # kill -0 57998 00:05:19.054 22:05:15 -- common/autotest_common.sh@941 -- # uname 00:05:19.054 22:05:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:19.054 22:05:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57998 00:05:19.054 22:05:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:19.054 22:05:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:19.054 killing process with pid 57998 00:05:19.054 22:05:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57998' 00:05:19.054 22:05:15 -- common/autotest_common.sh@955 -- # kill 57998 00:05:19.054 22:05:15 -- common/autotest_common.sh@960 -- # wait 57998 00:05:19.620 22:05:16 -- event/cpu_locks.sh@16 -- # [[ -z 58028 ]] 00:05:19.620 22:05:16 -- event/cpu_locks.sh@16 -- # killprocess 58028 00:05:19.620 22:05:16 -- common/autotest_common.sh@936 -- # '[' -z 58028 ']' 00:05:19.620 22:05:16 -- common/autotest_common.sh@940 -- # kill -0 58028 00:05:19.620 22:05:16 -- common/autotest_common.sh@941 -- # uname 00:05:19.620 22:05:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:19.620 22:05:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58028 00:05:19.620 22:05:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:19.620 22:05:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:19.620 killing process with pid 58028 00:05:19.620 22:05:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58028' 00:05:19.620 22:05:16 -- common/autotest_common.sh@955 -- # kill 58028 00:05:19.620 22:05:16 -- common/autotest_common.sh@960 -- # wait 58028 00:05:19.879 22:05:16 -- event/cpu_locks.sh@18 -- # rm -f 00:05:19.879 22:05:16 -- event/cpu_locks.sh@1 -- # cleanup 00:05:19.879 22:05:16 -- event/cpu_locks.sh@15 -- # [[ -z 57998 ]] 00:05:19.879 22:05:16 -- event/cpu_locks.sh@15 -- # killprocess 57998 00:05:19.879 22:05:16 -- 
common/autotest_common.sh@936 -- # '[' -z 57998 ']' 00:05:19.879 22:05:16 -- common/autotest_common.sh@940 -- # kill -0 57998 00:05:19.879 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (57998) - No such process 00:05:19.879 Process with pid 57998 is not found 00:05:19.879 22:05:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 57998 is not found' 00:05:19.879 22:05:16 -- event/cpu_locks.sh@16 -- # [[ -z 58028 ]] 00:05:19.879 22:05:16 -- event/cpu_locks.sh@16 -- # killprocess 58028 00:05:19.879 22:05:16 -- common/autotest_common.sh@936 -- # '[' -z 58028 ']' 00:05:19.879 22:05:16 -- common/autotest_common.sh@940 -- # kill -0 58028 00:05:19.879 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58028) - No such process 00:05:19.879 Process with pid 58028 is not found 00:05:19.879 22:05:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58028 is not found' 00:05:19.879 22:05:16 -- event/cpu_locks.sh@18 -- # rm -f 00:05:19.879 00:05:19.879 real 0m22.676s 00:05:19.879 user 0m38.766s 00:05:19.879 sys 0m6.186s 00:05:19.879 22:05:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.879 ************************************ 00:05:19.879 END TEST cpu_locks 00:05:19.879 22:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:19.879 ************************************ 00:05:20.137 00:05:20.137 real 0m51.032s 00:05:20.137 user 1m36.826s 00:05:20.137 sys 0m10.087s 00:05:20.137 22:05:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.137 22:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:20.137 ************************************ 00:05:20.137 END TEST event 00:05:20.137 ************************************ 00:05:20.137 22:05:16 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:20.137 22:05:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.137 22:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.137 22:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:20.137 ************************************ 00:05:20.137 START TEST thread 00:05:20.137 ************************************ 00:05:20.137 22:05:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:20.137 * Looking for test storage... 
00:05:20.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:20.137 22:05:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:20.137 22:05:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:20.137 22:05:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:20.137 22:05:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:20.137 22:05:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:20.137 22:05:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:20.137 22:05:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:20.137 22:05:16 -- scripts/common.sh@335 -- # IFS=.-: 00:05:20.137 22:05:16 -- scripts/common.sh@335 -- # read -ra ver1 00:05:20.137 22:05:16 -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.137 22:05:16 -- scripts/common.sh@336 -- # read -ra ver2 00:05:20.137 22:05:16 -- scripts/common.sh@337 -- # local 'op=<' 00:05:20.137 22:05:16 -- scripts/common.sh@339 -- # ver1_l=2 00:05:20.138 22:05:16 -- scripts/common.sh@340 -- # ver2_l=1 00:05:20.138 22:05:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:20.138 22:05:16 -- scripts/common.sh@343 -- # case "$op" in 00:05:20.138 22:05:16 -- scripts/common.sh@344 -- # : 1 00:05:20.138 22:05:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:20.138 22:05:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.138 22:05:16 -- scripts/common.sh@364 -- # decimal 1 00:05:20.138 22:05:16 -- scripts/common.sh@352 -- # local d=1 00:05:20.138 22:05:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.138 22:05:16 -- scripts/common.sh@354 -- # echo 1 00:05:20.138 22:05:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:20.138 22:05:16 -- scripts/common.sh@365 -- # decimal 2 00:05:20.138 22:05:16 -- scripts/common.sh@352 -- # local d=2 00:05:20.138 22:05:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.138 22:05:16 -- scripts/common.sh@354 -- # echo 2 00:05:20.138 22:05:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:20.138 22:05:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:20.138 22:05:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:20.138 22:05:16 -- scripts/common.sh@367 -- # return 0 00:05:20.138 22:05:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.138 22:05:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:20.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.138 --rc genhtml_branch_coverage=1 00:05:20.138 --rc genhtml_function_coverage=1 00:05:20.138 --rc genhtml_legend=1 00:05:20.138 --rc geninfo_all_blocks=1 00:05:20.138 --rc geninfo_unexecuted_blocks=1 00:05:20.138 00:05:20.138 ' 00:05:20.138 22:05:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:20.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.138 --rc genhtml_branch_coverage=1 00:05:20.138 --rc genhtml_function_coverage=1 00:05:20.138 --rc genhtml_legend=1 00:05:20.138 --rc geninfo_all_blocks=1 00:05:20.138 --rc geninfo_unexecuted_blocks=1 00:05:20.138 00:05:20.138 ' 00:05:20.138 22:05:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:20.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.138 --rc genhtml_branch_coverage=1 00:05:20.138 --rc genhtml_function_coverage=1 00:05:20.138 --rc genhtml_legend=1 00:05:20.138 --rc geninfo_all_blocks=1 00:05:20.138 --rc geninfo_unexecuted_blocks=1 00:05:20.138 00:05:20.138 ' 00:05:20.138 22:05:16 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:20.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.138 --rc genhtml_branch_coverage=1 00:05:20.138 --rc genhtml_function_coverage=1 00:05:20.138 --rc genhtml_legend=1 00:05:20.138 --rc geninfo_all_blocks=1 00:05:20.138 --rc geninfo_unexecuted_blocks=1 00:05:20.138 00:05:20.138 ' 00:05:20.138 22:05:16 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:20.138 22:05:16 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:20.138 22:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.138 22:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:20.138 ************************************ 00:05:20.138 START TEST thread_poller_perf 00:05:20.138 ************************************ 00:05:20.138 22:05:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:20.396 [2024-11-17 22:05:16.766905] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:20.396 [2024-11-17 22:05:16.767025] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58187 ] 00:05:20.396 [2024-11-17 22:05:16.900394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.655 [2024-11-17 22:05:17.050323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.655 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:21.590 [2024-11-17T22:05:18.205Z] ====================================== 00:05:21.590 [2024-11-17T22:05:18.205Z] busy:2210564646 (cyc) 00:05:21.590 [2024-11-17T22:05:18.205Z] total_run_count: 385000 00:05:21.590 [2024-11-17T22:05:18.205Z] tsc_hz: 2200000000 (cyc) 00:05:21.590 [2024-11-17T22:05:18.206Z] ====================================== 00:05:21.591 [2024-11-17T22:05:18.206Z] poller_cost: 5741 (cyc), 2609 (nsec) 00:05:21.591 00:05:21.591 real 0m1.425s 00:05:21.591 user 0m1.249s 00:05:21.591 sys 0m0.068s 00:05:21.591 22:05:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.591 ************************************ 00:05:21.591 END TEST thread_poller_perf 00:05:21.591 ************************************ 00:05:21.591 22:05:18 -- common/autotest_common.sh@10 -- # set +x 00:05:21.850 22:05:18 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:21.850 22:05:18 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:21.850 22:05:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.850 22:05:18 -- common/autotest_common.sh@10 -- # set +x 00:05:21.850 ************************************ 00:05:21.850 START TEST thread_poller_perf 00:05:21.850 ************************************ 00:05:21.850 22:05:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:21.850 [2024-11-17 22:05:18.241267] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
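Annotation: the first poller_perf run above was invoked with -b 1000 -l 1 -t 1, which its banner spells out as 1000 pollers with a 1 microsecond period for 1 second; the run starting here repeats the measurement with -l 0 (no period). The poller_cost line in the summary is consistent with busy cycles divided by completed runs, converted to time with the reported TSC rate:

  poller_cost ~ 2210564646 cyc / 385000 runs ~ 5741 cyc per poller run
  5741 cyc / 2.2 GHz (tsc_hz: 2200000000) ~ 2609 nsec

which matches the "5741 (cyc), 2609 (nsec)" figure printed above; the second run's table can be checked the same way.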
00:05:21.850 [2024-11-17 22:05:18.241375] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58228 ] 00:05:21.850 [2024-11-17 22:05:18.376979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.109 [2024-11-17 22:05:18.472927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.109 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:23.054 [2024-11-17T22:05:19.669Z] ====================================== 00:05:23.054 [2024-11-17T22:05:19.669Z] busy:2202740980 (cyc) 00:05:23.054 [2024-11-17T22:05:19.669Z] total_run_count: 5297000 00:05:23.054 [2024-11-17T22:05:19.669Z] tsc_hz: 2200000000 (cyc) 00:05:23.054 [2024-11-17T22:05:19.669Z] ====================================== 00:05:23.054 [2024-11-17T22:05:19.669Z] poller_cost: 415 (cyc), 188 (nsec) 00:05:23.054 00:05:23.054 real 0m1.362s 00:05:23.054 user 0m1.202s 00:05:23.054 sys 0m0.053s 00:05:23.055 22:05:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.055 22:05:19 -- common/autotest_common.sh@10 -- # set +x 00:05:23.055 ************************************ 00:05:23.055 END TEST thread_poller_perf 00:05:23.055 ************************************ 00:05:23.055 22:05:19 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:23.055 ************************************ 00:05:23.055 END TEST thread 00:05:23.055 ************************************ 00:05:23.055 00:05:23.055 real 0m3.063s 00:05:23.055 user 0m2.576s 00:05:23.055 sys 0m0.275s 00:05:23.055 22:05:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.055 22:05:19 -- common/autotest_common.sh@10 -- # set +x 00:05:23.313 22:05:19 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:23.313 22:05:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.313 22:05:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.313 22:05:19 -- common/autotest_common.sh@10 -- # set +x 00:05:23.313 ************************************ 00:05:23.313 START TEST accel 00:05:23.313 ************************************ 00:05:23.313 22:05:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:23.313 * Looking for test storage... 
00:05:23.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:23.313 22:05:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:23.313 22:05:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:23.313 22:05:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:23.313 22:05:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:23.313 22:05:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:23.313 22:05:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:23.313 22:05:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:23.313 22:05:19 -- scripts/common.sh@335 -- # IFS=.-: 00:05:23.313 22:05:19 -- scripts/common.sh@335 -- # read -ra ver1 00:05:23.313 22:05:19 -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.313 22:05:19 -- scripts/common.sh@336 -- # read -ra ver2 00:05:23.313 22:05:19 -- scripts/common.sh@337 -- # local 'op=<' 00:05:23.313 22:05:19 -- scripts/common.sh@339 -- # ver1_l=2 00:05:23.313 22:05:19 -- scripts/common.sh@340 -- # ver2_l=1 00:05:23.313 22:05:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:23.313 22:05:19 -- scripts/common.sh@343 -- # case "$op" in 00:05:23.313 22:05:19 -- scripts/common.sh@344 -- # : 1 00:05:23.313 22:05:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:23.313 22:05:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.313 22:05:19 -- scripts/common.sh@364 -- # decimal 1 00:05:23.313 22:05:19 -- scripts/common.sh@352 -- # local d=1 00:05:23.313 22:05:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.313 22:05:19 -- scripts/common.sh@354 -- # echo 1 00:05:23.313 22:05:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:23.313 22:05:19 -- scripts/common.sh@365 -- # decimal 2 00:05:23.313 22:05:19 -- scripts/common.sh@352 -- # local d=2 00:05:23.314 22:05:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.314 22:05:19 -- scripts/common.sh@354 -- # echo 2 00:05:23.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:23.314 22:05:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:23.314 22:05:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:23.314 22:05:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:23.314 22:05:19 -- scripts/common.sh@367 -- # return 0 00:05:23.314 22:05:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.314 22:05:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:23.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.314 --rc genhtml_branch_coverage=1 00:05:23.314 --rc genhtml_function_coverage=1 00:05:23.314 --rc genhtml_legend=1 00:05:23.314 --rc geninfo_all_blocks=1 00:05:23.314 --rc geninfo_unexecuted_blocks=1 00:05:23.314 00:05:23.314 ' 00:05:23.314 22:05:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:23.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.314 --rc genhtml_branch_coverage=1 00:05:23.314 --rc genhtml_function_coverage=1 00:05:23.314 --rc genhtml_legend=1 00:05:23.314 --rc geninfo_all_blocks=1 00:05:23.314 --rc geninfo_unexecuted_blocks=1 00:05:23.314 00:05:23.314 ' 00:05:23.314 22:05:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:23.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.314 --rc genhtml_branch_coverage=1 00:05:23.314 --rc genhtml_function_coverage=1 00:05:23.314 --rc genhtml_legend=1 00:05:23.314 --rc geninfo_all_blocks=1 00:05:23.314 --rc geninfo_unexecuted_blocks=1 00:05:23.314 00:05:23.314 ' 00:05:23.314 22:05:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:23.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.314 --rc genhtml_branch_coverage=1 00:05:23.314 --rc genhtml_function_coverage=1 00:05:23.314 --rc genhtml_legend=1 00:05:23.314 --rc geninfo_all_blocks=1 00:05:23.314 --rc geninfo_unexecuted_blocks=1 00:05:23.314 00:05:23.314 ' 00:05:23.314 22:05:19 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:23.314 22:05:19 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:23.314 22:05:19 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.314 22:05:19 -- accel/accel.sh@59 -- # spdk_tgt_pid=58304 00:05:23.314 22:05:19 -- accel/accel.sh@60 -- # waitforlisten 58304 00:05:23.314 22:05:19 -- common/autotest_common.sh@829 -- # '[' -z 58304 ']' 00:05:23.314 22:05:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.314 22:05:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.314 22:05:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.314 22:05:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.314 22:05:19 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:23.314 22:05:19 -- common/autotest_common.sh@10 -- # set +x 00:05:23.314 22:05:19 -- accel/accel.sh@58 -- # build_accel_config 00:05:23.314 22:05:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.314 22:05:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.314 22:05:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.314 22:05:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.314 22:05:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:23.314 22:05:19 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.314 22:05:19 -- accel/accel.sh@42 -- # jq -r . 
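Annotation: the accel suite's target, started above as spdk_tgt -c /dev/fd/63 (pid 58304), takes its accel configuration as JSON over a file descriptor. build_accel_config only fills the accel_json_cfg array when module-specific flags are set, and every "[[ 0 -gt 0 ]]" check in this trace evaluated false, so the config passed here is effectively empty. The /dev/fd/63 argument is what a bash process substitution looks like from the outside; a rough sketch of the pattern (the exact JSON the harness emits is not shown in this log, so the payload below is only a placeholder):

  accel_json_cfg=()                       # stays empty in this run: the module checks above were all false
  # feed the (placeholder) config to the target without a temp file; bash turns
  # the <(...) substitution into a /dev/fd/NN path like the /dev/fd/63 seen above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c <(echo '{"subsystems": []}') &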
00:05:23.573 [2024-11-17 22:05:19.941257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:23.573 [2024-11-17 22:05:19.941498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58304 ] 00:05:23.573 [2024-11-17 22:05:20.079803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.573 [2024-11-17 22:05:20.169720] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.573 [2024-11-17 22:05:20.170247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.509 22:05:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.509 22:05:20 -- common/autotest_common.sh@862 -- # return 0 00:05:24.509 22:05:20 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:24.509 22:05:20 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:24.509 22:05:20 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:24.509 22:05:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.509 22:05:20 -- common/autotest_common.sh@10 -- # set +x 00:05:24.509 22:05:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 
00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # IFS== 00:05:24.509 22:05:21 -- accel/accel.sh@64 -- # read -r opc module 00:05:24.509 22:05:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:24.509 22:05:21 -- accel/accel.sh@67 -- # killprocess 58304 00:05:24.509 22:05:21 -- common/autotest_common.sh@936 -- # '[' -z 58304 ']' 00:05:24.509 22:05:21 -- common/autotest_common.sh@940 -- # kill -0 58304 00:05:24.509 22:05:21 -- common/autotest_common.sh@941 -- # uname 00:05:24.509 22:05:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.509 22:05:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58304 00:05:24.509 22:05:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.509 22:05:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.509 22:05:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58304' 00:05:24.509 killing process with pid 58304 00:05:24.509 22:05:21 -- common/autotest_common.sh@955 -- # kill 58304 00:05:24.509 22:05:21 -- common/autotest_common.sh@960 -- # wait 58304 00:05:25.078 22:05:21 -- accel/accel.sh@68 -- # trap - ERR 00:05:25.078 22:05:21 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:25.078 22:05:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:25.078 22:05:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.078 22:05:21 -- common/autotest_common.sh@10 -- # set +x 00:05:25.078 22:05:21 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:25.078 22:05:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:25.078 22:05:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.078 22:05:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.078 22:05:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.078 22:05:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:05:25.078 22:05:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.078 22:05:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.078 22:05:21 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.078 22:05:21 -- accel/accel.sh@42 -- # jq -r . 00:05:25.078 22:05:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.078 22:05:21 -- common/autotest_common.sh@10 -- # set +x 00:05:25.337 22:05:21 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:25.337 22:05:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:25.337 22:05:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.337 22:05:21 -- common/autotest_common.sh@10 -- # set +x 00:05:25.337 ************************************ 00:05:25.337 START TEST accel_missing_filename 00:05:25.337 ************************************ 00:05:25.337 22:05:21 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:25.337 22:05:21 -- common/autotest_common.sh@650 -- # local es=0 00:05:25.337 22:05:21 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:25.337 22:05:21 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:25.337 22:05:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.337 22:05:21 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:25.337 22:05:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.337 22:05:21 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:25.337 22:05:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:25.337 22:05:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.337 22:05:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.337 22:05:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.337 22:05:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.337 22:05:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.337 22:05:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.337 22:05:21 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.337 22:05:21 -- accel/accel.sh@42 -- # jq -r . 00:05:25.337 [2024-11-17 22:05:21.767827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:25.337 [2024-11-17 22:05:21.767930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58379 ] 00:05:25.337 [2024-11-17 22:05:21.903597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.596 [2024-11-17 22:05:22.000563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.596 [2024-11-17 22:05:22.073055] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:25.596 [2024-11-17 22:05:22.186719] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:25.856 A filename is required. 
00:05:25.856 22:05:22 -- common/autotest_common.sh@653 -- # es=234 00:05:25.856 22:05:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:25.856 22:05:22 -- common/autotest_common.sh@662 -- # es=106 00:05:25.856 22:05:22 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:25.856 22:05:22 -- common/autotest_common.sh@670 -- # es=1 00:05:25.856 ************************************ 00:05:25.856 END TEST accel_missing_filename 00:05:25.856 ************************************ 00:05:25.856 22:05:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:25.856 00:05:25.856 real 0m0.571s 00:05:25.856 user 0m0.382s 00:05:25.856 sys 0m0.137s 00:05:25.856 22:05:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.856 22:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.856 22:05:22 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:25.856 22:05:22 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:25.856 22:05:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.856 22:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.856 ************************************ 00:05:25.856 START TEST accel_compress_verify 00:05:25.856 ************************************ 00:05:25.856 22:05:22 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:25.856 22:05:22 -- common/autotest_common.sh@650 -- # local es=0 00:05:25.856 22:05:22 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:25.856 22:05:22 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:25.856 22:05:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.856 22:05:22 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:25.856 22:05:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.856 22:05:22 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:25.856 22:05:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:25.856 22:05:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.856 22:05:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.856 22:05:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.856 22:05:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.856 22:05:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.856 22:05:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.856 22:05:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.856 22:05:22 -- accel/accel.sh@42 -- # jq -r . 00:05:25.856 [2024-11-17 22:05:22.393241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:25.856 [2024-11-17 22:05:22.393341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58409 ] 00:05:26.116 [2024-11-17 22:05:22.530310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.116 [2024-11-17 22:05:22.613112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.116 [2024-11-17 22:05:22.684117] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:26.375 [2024-11-17 22:05:22.791676] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:26.375 00:05:26.375 Compression does not support the verify option, aborting. 00:05:26.375 22:05:22 -- common/autotest_common.sh@653 -- # es=161 00:05:26.375 22:05:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.375 22:05:22 -- common/autotest_common.sh@662 -- # es=33 00:05:26.375 22:05:22 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:26.375 22:05:22 -- common/autotest_common.sh@670 -- # es=1 00:05:26.375 22:05:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.375 00:05:26.375 real 0m0.545s 00:05:26.375 user 0m0.346s 00:05:26.375 sys 0m0.138s 00:05:26.375 ************************************ 00:05:26.375 END TEST accel_compress_verify 00:05:26.375 ************************************ 00:05:26.375 22:05:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.375 22:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:26.375 22:05:22 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:26.375 22:05:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:26.375 22:05:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.375 22:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:26.375 ************************************ 00:05:26.375 START TEST accel_wrong_workload 00:05:26.375 ************************************ 00:05:26.375 22:05:22 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:26.375 22:05:22 -- common/autotest_common.sh@650 -- # local es=0 00:05:26.375 22:05:22 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:26.375 22:05:22 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:26.375 22:05:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.375 22:05:22 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:26.375 22:05:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.375 22:05:22 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:26.375 22:05:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:26.375 22:05:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.375 22:05:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:26.375 22:05:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.375 22:05:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.375 22:05:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:26.375 22:05:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:26.375 22:05:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:26.375 22:05:22 -- accel/accel.sh@42 -- # jq -r . 
00:05:26.375 Unsupported workload type: foobar 00:05:26.375 [2024-11-17 22:05:22.988375] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:26.634 accel_perf options: 00:05:26.634 [-h help message] 00:05:26.634 [-q queue depth per core] 00:05:26.634 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:26.634 [-T number of threads per core 00:05:26.634 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:26.634 [-t time in seconds] 00:05:26.634 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:26.634 [ dif_verify, , dif_generate, dif_generate_copy 00:05:26.634 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:26.634 [-l for compress/decompress workloads, name of uncompressed input file 00:05:26.635 [-S for crc32c workload, use this seed value (default 0) 00:05:26.635 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:26.635 [-f for fill workload, use this BYTE value (default 255) 00:05:26.635 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:26.635 [-y verify result if this switch is on] 00:05:26.635 [-a tasks to allocate per core (default: same value as -q)] 00:05:26.635 Can be used to spread operations across a wider range of memory. 00:05:26.635 22:05:22 -- common/autotest_common.sh@653 -- # es=1 00:05:26.635 22:05:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.635 22:05:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.635 22:05:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.635 00:05:26.635 real 0m0.029s 00:05:26.635 user 0m0.014s 00:05:26.635 sys 0m0.015s 00:05:26.635 22:05:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.635 22:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:26.635 ************************************ 00:05:26.635 END TEST accel_wrong_workload 00:05:26.635 ************************************ 00:05:26.635 22:05:23 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:26.635 22:05:23 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:26.635 22:05:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.635 22:05:23 -- common/autotest_common.sh@10 -- # set +x 00:05:26.635 ************************************ 00:05:26.635 START TEST accel_negative_buffers 00:05:26.635 ************************************ 00:05:26.635 22:05:23 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:26.635 22:05:23 -- common/autotest_common.sh@650 -- # local es=0 00:05:26.635 22:05:23 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:26.635 22:05:23 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:26.635 22:05:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.635 22:05:23 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:26.635 22:05:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.635 22:05:23 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:26.635 22:05:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:26.635 22:05:23 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:26.635 22:05:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:26.635 22:05:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.635 22:05:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.635 22:05:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:26.635 22:05:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:26.635 22:05:23 -- accel/accel.sh@41 -- # local IFS=, 00:05:26.635 22:05:23 -- accel/accel.sh@42 -- # jq -r . 00:05:26.635 -x option must be non-negative. 00:05:26.635 [2024-11-17 22:05:23.067648] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:26.635 accel_perf options: 00:05:26.635 [-h help message] 00:05:26.635 [-q queue depth per core] 00:05:26.635 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:26.635 [-T number of threads per core 00:05:26.635 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:26.635 [-t time in seconds] 00:05:26.635 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:26.635 [ dif_verify, , dif_generate, dif_generate_copy 00:05:26.635 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:26.635 [-l for compress/decompress workloads, name of uncompressed input file 00:05:26.635 [-S for crc32c workload, use this seed value (default 0) 00:05:26.635 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:26.635 [-f for fill workload, use this BYTE value (default 255) 00:05:26.635 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:26.635 [-y verify result if this switch is on] 00:05:26.635 [-a tasks to allocate per core (default: same value as -q)] 00:05:26.635 Can be used to spread operations across a wider range of memory. 
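Annotation: the two failures above are intentional negative tests. accel_wrong_workload runs accel_perf with "-w foobar" and accel_negative_buffers runs it with "-x -1"; in both cases the tool prints its option summary and exits non-zero, which the harness's NOT wrapper turns into a passing test (the "es=1" bookkeeping in the trace is that wrapper inspecting the exit status). Stripped of the harness, the idea is simply:

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  NOT() { if "$@"; then return 1; else return 0; fi; }   # simplified stand-in for autotest_common.sh's NOT
  NOT "$ACCEL_PERF" -t 1 -w foobar          # "Unsupported workload type: foobar" -> non-zero exit -> pass
  NOT "$ACCEL_PERF" -t 1 -w xor -y -x -1    # "-x option must be non-negative." -> non-zero exit -> pass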
00:05:26.635 22:05:23 -- common/autotest_common.sh@653 -- # es=1 00:05:26.635 22:05:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.635 22:05:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.635 22:05:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.635 00:05:26.635 real 0m0.032s 00:05:26.635 user 0m0.020s 00:05:26.635 sys 0m0.012s 00:05:26.635 22:05:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.635 22:05:23 -- common/autotest_common.sh@10 -- # set +x 00:05:26.635 ************************************ 00:05:26.635 END TEST accel_negative_buffers 00:05:26.635 ************************************ 00:05:26.635 22:05:23 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:26.635 22:05:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:26.635 22:05:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.635 22:05:23 -- common/autotest_common.sh@10 -- # set +x 00:05:26.635 ************************************ 00:05:26.635 START TEST accel_crc32c 00:05:26.635 ************************************ 00:05:26.635 22:05:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:26.635 22:05:23 -- accel/accel.sh@16 -- # local accel_opc 00:05:26.635 22:05:23 -- accel/accel.sh@17 -- # local accel_module 00:05:26.635 22:05:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:26.635 22:05:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:26.635 22:05:23 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.635 22:05:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:26.635 22:05:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.635 22:05:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.635 22:05:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:26.635 22:05:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:26.635 22:05:23 -- accel/accel.sh@41 -- # local IFS=, 00:05:26.635 22:05:23 -- accel/accel.sh@42 -- # jq -r . 00:05:26.635 [2024-11-17 22:05:23.151197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.635 [2024-11-17 22:05:23.151293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58468 ] 00:05:26.894 [2024-11-17 22:05:23.286389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.894 [2024-11-17 22:05:23.374043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.271 22:05:24 -- accel/accel.sh@18 -- # out=' 00:05:28.271 SPDK Configuration: 00:05:28.271 Core mask: 0x1 00:05:28.271 00:05:28.271 Accel Perf Configuration: 00:05:28.271 Workload Type: crc32c 00:05:28.271 CRC-32C seed: 32 00:05:28.271 Transfer size: 4096 bytes 00:05:28.271 Vector count 1 00:05:28.271 Module: software 00:05:28.271 Queue depth: 32 00:05:28.271 Allocate depth: 32 00:05:28.271 # threads/core: 1 00:05:28.271 Run time: 1 seconds 00:05:28.271 Verify: Yes 00:05:28.271 00:05:28.271 Running for 1 seconds... 
00:05:28.271 00:05:28.271 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:28.271 ------------------------------------------------------------------------------------ 00:05:28.271 0,0 561600/s 2193 MiB/s 0 0 00:05:28.271 ==================================================================================== 00:05:28.271 Total 561600/s 2193 MiB/s 0 0' 00:05:28.271 22:05:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:28.271 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.271 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.271 22:05:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:28.271 22:05:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.271 22:05:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:28.271 22:05:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.271 22:05:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.271 22:05:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:28.271 22:05:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:28.271 22:05:24 -- accel/accel.sh@41 -- # local IFS=, 00:05:28.271 22:05:24 -- accel/accel.sh@42 -- # jq -r . 00:05:28.271 [2024-11-17 22:05:24.689110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:28.271 [2024-11-17 22:05:24.689185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58487 ] 00:05:28.271 [2024-11-17 22:05:24.818707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.530 [2024-11-17 22:05:24.898652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.530 22:05:24 -- accel/accel.sh@21 -- # val= 00:05:28.530 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.530 22:05:24 -- accel/accel.sh@21 -- # val= 00:05:28.530 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.530 22:05:24 -- accel/accel.sh@21 -- # val=0x1 00:05:28.530 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.530 22:05:24 -- accel/accel.sh@21 -- # val= 00:05:28.530 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.530 22:05:24 -- accel/accel.sh@21 -- # val= 00:05:28.530 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.530 22:05:24 -- accel/accel.sh@21 -- # val=crc32c 00:05:28.530 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.530 22:05:24 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.530 22:05:24 -- accel/accel.sh@21 -- # val=32 00:05:28.530 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.530 22:05:24 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:28.530 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.530 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.531 22:05:24 -- accel/accel.sh@21 -- # val= 00:05:28.531 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.531 22:05:24 -- accel/accel.sh@21 -- # val=software 00:05:28.531 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.531 22:05:24 -- accel/accel.sh@23 -- # accel_module=software 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.531 22:05:24 -- accel/accel.sh@21 -- # val=32 00:05:28.531 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.531 22:05:24 -- accel/accel.sh@21 -- # val=32 00:05:28.531 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.531 22:05:24 -- accel/accel.sh@21 -- # val=1 00:05:28.531 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.531 22:05:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:28.531 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.531 22:05:24 -- accel/accel.sh@21 -- # val=Yes 00:05:28.531 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.531 22:05:24 -- accel/accel.sh@21 -- # val= 00:05:28.531 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:28.531 22:05:24 -- accel/accel.sh@21 -- # val= 00:05:28.531 22:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:05:28.531 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:05:29.908 22:05:26 -- accel/accel.sh@21 -- # val= 00:05:29.908 22:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.908 22:05:26 -- accel/accel.sh@20 -- # IFS=: 00:05:29.908 22:05:26 -- accel/accel.sh@20 -- # read -r var val 00:05:29.908 22:05:26 -- accel/accel.sh@21 -- # val= 00:05:29.908 22:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.908 22:05:26 -- accel/accel.sh@20 -- # IFS=: 00:05:29.908 22:05:26 -- accel/accel.sh@20 -- # read -r var val 00:05:29.908 22:05:26 -- accel/accel.sh@21 -- # val= 00:05:29.908 22:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.908 22:05:26 -- accel/accel.sh@20 -- # IFS=: 00:05:29.908 22:05:26 -- accel/accel.sh@20 -- # read -r var val 00:05:29.908 22:05:26 -- accel/accel.sh@21 -- # val= 00:05:29.908 22:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.908 22:05:26 -- accel/accel.sh@20 -- # IFS=: 00:05:29.908 22:05:26 -- accel/accel.sh@20 -- # read -r var val 00:05:29.908 22:05:26 -- accel/accel.sh@21 -- # val= 00:05:29.908 22:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.908 22:05:26 -- accel/accel.sh@20 -- # IFS=: 00:05:29.908 22:05:26 -- 
accel/accel.sh@20 -- # read -r var val 00:05:29.908 22:05:26 -- accel/accel.sh@21 -- # val= 00:05:29.908 22:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.908 22:05:26 -- accel/accel.sh@20 -- # IFS=: 00:05:29.908 22:05:26 -- accel/accel.sh@20 -- # read -r var val 00:05:29.908 22:05:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:29.908 22:05:26 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:29.908 22:05:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.908 00:05:29.908 real 0m3.077s 00:05:29.908 user 0m2.614s 00:05:29.908 sys 0m0.263s 00:05:29.908 22:05:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.908 22:05:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.908 ************************************ 00:05:29.908 END TEST accel_crc32c 00:05:29.908 ************************************ 00:05:29.908 22:05:26 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:29.908 22:05:26 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:29.908 22:05:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.908 22:05:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.908 ************************************ 00:05:29.908 START TEST accel_crc32c_C2 00:05:29.908 ************************************ 00:05:29.908 22:05:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:29.908 22:05:26 -- accel/accel.sh@16 -- # local accel_opc 00:05:29.908 22:05:26 -- accel/accel.sh@17 -- # local accel_module 00:05:29.908 22:05:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:29.908 22:05:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:29.908 22:05:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.908 22:05:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:29.908 22:05:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.908 22:05:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.908 22:05:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:29.908 22:05:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:29.908 22:05:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:29.908 22:05:26 -- accel/accel.sh@42 -- # jq -r . 00:05:29.908 [2024-11-17 22:05:26.278959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:29.908 [2024-11-17 22:05:26.279051] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58522 ] 00:05:29.908 [2024-11-17 22:05:26.414130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.908 [2024-11-17 22:05:26.515593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.286 22:05:27 -- accel/accel.sh@18 -- # out=' 00:05:31.286 SPDK Configuration: 00:05:31.286 Core mask: 0x1 00:05:31.286 00:05:31.286 Accel Perf Configuration: 00:05:31.286 Workload Type: crc32c 00:05:31.286 CRC-32C seed: 0 00:05:31.286 Transfer size: 4096 bytes 00:05:31.286 Vector count 2 00:05:31.286 Module: software 00:05:31.286 Queue depth: 32 00:05:31.286 Allocate depth: 32 00:05:31.286 # threads/core: 1 00:05:31.286 Run time: 1 seconds 00:05:31.286 Verify: Yes 00:05:31.286 00:05:31.286 Running for 1 seconds... 
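Before the two-vector crc32c results print below, the single-vector figures above are easy to sanity-check: 561600 transfers/s over 4096-byte buffers works out to the reported 2193 MiB/s. Illustrative shell arithmetic only, not part of the test:

    echo $(( 561600 * 4096 / 1024 / 1024 ))    # 2193 (MiB/s) for the 4 KiB crc32c run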
00:05:31.286 00:05:31.286 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:31.286 ------------------------------------------------------------------------------------ 00:05:31.286 0,0 436032/s 3406 MiB/s 0 0 00:05:31.286 ==================================================================================== 00:05:31.286 Total 436032/s 1703 MiB/s 0 0' 00:05:31.286 22:05:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:31.286 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:05:31.286 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:05:31.286 22:05:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:31.286 22:05:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.286 22:05:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:31.286 22:05:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.286 22:05:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.286 22:05:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:31.286 22:05:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:31.286 22:05:27 -- accel/accel.sh@41 -- # local IFS=, 00:05:31.286 22:05:27 -- accel/accel.sh@42 -- # jq -r . 00:05:31.286 [2024-11-17 22:05:27.836785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:31.286 [2024-11-17 22:05:27.836861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58541 ] 00:05:31.546 [2024-11-17 22:05:27.968718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.546 [2024-11-17 22:05:28.047073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val= 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val= 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val=0x1 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val= 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val= 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val=crc32c 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val=0 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val= 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val=software 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@23 -- # accel_module=software 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val=32 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val=32 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val=1 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val=Yes 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val= 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:31.546 22:05:28 -- accel/accel.sh@21 -- # val= 00:05:31.546 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:05:31.546 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:05:32.933 22:05:29 -- accel/accel.sh@21 -- # val= 00:05:32.933 22:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.933 22:05:29 -- accel/accel.sh@20 -- # IFS=: 00:05:32.933 22:05:29 -- accel/accel.sh@20 -- # read -r var val 00:05:32.933 22:05:29 -- accel/accel.sh@21 -- # val= 00:05:32.933 22:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.933 22:05:29 -- accel/accel.sh@20 -- # IFS=: 00:05:32.933 22:05:29 -- accel/accel.sh@20 -- # read -r var val 00:05:32.933 22:05:29 -- accel/accel.sh@21 -- # val= 00:05:32.933 22:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.933 22:05:29 -- accel/accel.sh@20 -- # IFS=: 00:05:32.933 22:05:29 -- accel/accel.sh@20 -- # read -r var val 00:05:32.933 22:05:29 -- accel/accel.sh@21 -- # val= 00:05:32.933 22:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.933 22:05:29 -- accel/accel.sh@20 -- # IFS=: 00:05:32.933 22:05:29 -- accel/accel.sh@20 -- # read -r var val 00:05:32.933 22:05:29 -- accel/accel.sh@21 -- # val= 00:05:32.933 22:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.933 22:05:29 -- accel/accel.sh@20 -- # IFS=: 00:05:32.933 22:05:29 -- 
accel/accel.sh@20 -- # read -r var val 00:05:32.933 22:05:29 -- accel/accel.sh@21 -- # val= 00:05:32.933 22:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.933 22:05:29 -- accel/accel.sh@20 -- # IFS=: 00:05:32.933 22:05:29 -- accel/accel.sh@20 -- # read -r var val 00:05:32.933 22:05:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:32.933 22:05:29 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:32.933 22:05:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.933 00:05:32.933 real 0m3.104s 00:05:32.933 user 0m2.634s 00:05:32.933 sys 0m0.265s 00:05:32.933 22:05:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.933 22:05:29 -- common/autotest_common.sh@10 -- # set +x 00:05:32.933 ************************************ 00:05:32.933 END TEST accel_crc32c_C2 00:05:32.933 ************************************ 00:05:32.933 22:05:29 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:32.933 22:05:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:32.933 22:05:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.933 22:05:29 -- common/autotest_common.sh@10 -- # set +x 00:05:32.933 ************************************ 00:05:32.933 START TEST accel_copy 00:05:32.933 ************************************ 00:05:32.933 22:05:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:05:32.933 22:05:29 -- accel/accel.sh@16 -- # local accel_opc 00:05:32.933 22:05:29 -- accel/accel.sh@17 -- # local accel_module 00:05:32.933 22:05:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:32.933 22:05:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:32.933 22:05:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.933 22:05:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.933 22:05:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.933 22:05:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.933 22:05:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.933 22:05:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.933 22:05:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.933 22:05:29 -- accel/accel.sh@42 -- # jq -r . 00:05:32.933 [2024-11-17 22:05:29.446695] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:32.933 [2024-11-17 22:05:29.446849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58576 ] 00:05:33.227 [2024-11-17 22:05:29.583644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.227 [2024-11-17 22:05:29.664912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.649 22:05:30 -- accel/accel.sh@18 -- # out=' 00:05:34.649 SPDK Configuration: 00:05:34.649 Core mask: 0x1 00:05:34.649 00:05:34.649 Accel Perf Configuration: 00:05:34.649 Workload Type: copy 00:05:34.649 Transfer size: 4096 bytes 00:05:34.649 Vector count 1 00:05:34.649 Module: software 00:05:34.649 Queue depth: 32 00:05:34.649 Allocate depth: 32 00:05:34.649 # threads/core: 1 00:05:34.649 Run time: 1 seconds 00:05:34.649 Verify: Yes 00:05:34.649 00:05:34.649 Running for 1 seconds... 
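The accel_copy case that starts above goes through the same plumbing as the crc32c cases before it: run_test wraps accel_test, which builds the accel JSON config and launches accel_perf. A rough sketch of that wrapper, inferred from the traced accel/accel.sh lines in this log (the exact body of accel_test is an assumption, not shown verbatim anywhere here):

    accel_test() {
        build_accel_config        # sets up accel_json_cfg=() as traced above
        /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 "$@"
    }
    run_test accel_copy accel_test -t 1 -w copy -y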
00:05:34.649 00:05:34.649 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:34.649 ------------------------------------------------------------------------------------ 00:05:34.649 0,0 386880/s 1511 MiB/s 0 0 00:05:34.649 ==================================================================================== 00:05:34.649 Total 386880/s 1511 MiB/s 0 0' 00:05:34.649 22:05:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:34.649 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:05:34.649 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:05:34.649 22:05:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:34.649 22:05:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.649 22:05:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.649 22:05:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.649 22:05:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.649 22:05:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.649 22:05:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.649 22:05:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.649 22:05:30 -- accel/accel.sh@42 -- # jq -r . 00:05:34.649 [2024-11-17 22:05:30.985483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:34.649 [2024-11-17 22:05:30.985559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58595 ] 00:05:34.649 [2024-11-17 22:05:31.114600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.649 [2024-11-17 22:05:31.201675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val= 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val= 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val=0x1 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val= 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val= 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val=copy 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- 
accel/accel.sh@21 -- # val= 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val=software 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@23 -- # accel_module=software 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val=32 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val=32 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val=1 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val=Yes 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val= 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:34.908 22:05:31 -- accel/accel.sh@21 -- # val= 00:05:34.908 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:05:34.908 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:05:36.287 22:05:32 -- accel/accel.sh@21 -- # val= 00:05:36.287 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.287 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:05:36.287 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:05:36.287 22:05:32 -- accel/accel.sh@21 -- # val= 00:05:36.287 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.287 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:05:36.287 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:05:36.287 22:05:32 -- accel/accel.sh@21 -- # val= 00:05:36.287 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.287 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:05:36.287 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:05:36.287 22:05:32 -- accel/accel.sh@21 -- # val= 00:05:36.287 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.287 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:05:36.287 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:05:36.287 22:05:32 -- accel/accel.sh@21 -- # val= 00:05:36.287 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.287 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:05:36.287 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:05:36.287 22:05:32 -- accel/accel.sh@21 -- # val= 00:05:36.287 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.287 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:05:36.287 22:05:32 -- 
accel/accel.sh@20 -- # read -r var val 00:05:36.287 22:05:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:36.287 22:05:32 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:36.287 22:05:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.287 00:05:36.287 real 0m3.088s 00:05:36.287 user 0m2.616s 00:05:36.287 sys 0m0.269s 00:05:36.287 22:05:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.287 ************************************ 00:05:36.287 END TEST accel_copy 00:05:36.287 ************************************ 00:05:36.287 22:05:32 -- common/autotest_common.sh@10 -- # set +x 00:05:36.287 22:05:32 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.287 22:05:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:36.287 22:05:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.287 22:05:32 -- common/autotest_common.sh@10 -- # set +x 00:05:36.287 ************************************ 00:05:36.287 START TEST accel_fill 00:05:36.287 ************************************ 00:05:36.287 22:05:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.287 22:05:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:36.287 22:05:32 -- accel/accel.sh@17 -- # local accel_module 00:05:36.287 22:05:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.287 22:05:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.287 22:05:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.287 22:05:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.287 22:05:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.287 22:05:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.287 22:05:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.287 22:05:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.287 22:05:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.287 22:05:32 -- accel/accel.sh@42 -- # jq -r . 00:05:36.287 [2024-11-17 22:05:32.589924] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.287 [2024-11-17 22:05:32.590183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58631 ] 00:05:36.287 [2024-11-17 22:05:32.730193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.287 [2024-11-17 22:05:32.837840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.663 22:05:34 -- accel/accel.sh@18 -- # out=' 00:05:37.663 SPDK Configuration: 00:05:37.663 Core mask: 0x1 00:05:37.663 00:05:37.663 Accel Perf Configuration: 00:05:37.663 Workload Type: fill 00:05:37.663 Fill pattern: 0x80 00:05:37.663 Transfer size: 4096 bytes 00:05:37.663 Vector count 1 00:05:37.663 Module: software 00:05:37.663 Queue depth: 64 00:05:37.663 Allocate depth: 64 00:05:37.663 # threads/core: 1 00:05:37.663 Run time: 1 seconds 00:05:37.663 Verify: Yes 00:05:37.663 00:05:37.663 Running for 1 seconds... 
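The fill run configured above reports pattern 0x80 because the harness invokes it as accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y (the run_test line earlier in this block): 128 is 0x80, and -q/-a raise the queue and allocation depth to 64 rather than the 32 used by the other cases. A hypothetical run with the documented default fill byte of 255 would only change the -f argument:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 255 -q 64 -a 64 -y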
00:05:37.663 00:05:37.663 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:37.663 ------------------------------------------------------------------------------------ 00:05:37.663 0,0 571776/s 2233 MiB/s 0 0 00:05:37.663 ==================================================================================== 00:05:37.663 Total 571776/s 2233 MiB/s 0 0' 00:05:37.663 22:05:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:37.663 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.663 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.663 22:05:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:37.663 22:05:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.663 22:05:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:37.663 22:05:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.663 22:05:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.663 22:05:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:37.663 22:05:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:37.663 22:05:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:37.663 22:05:34 -- accel/accel.sh@42 -- # jq -r . 00:05:37.663 [2024-11-17 22:05:34.164493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.663 [2024-11-17 22:05:34.164569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58649 ] 00:05:37.922 [2024-11-17 22:05:34.295237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.922 [2024-11-17 22:05:34.378943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.922 22:05:34 -- accel/accel.sh@21 -- # val= 00:05:37.922 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.922 22:05:34 -- accel/accel.sh@21 -- # val= 00:05:37.922 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.922 22:05:34 -- accel/accel.sh@21 -- # val=0x1 00:05:37.922 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.922 22:05:34 -- accel/accel.sh@21 -- # val= 00:05:37.922 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.922 22:05:34 -- accel/accel.sh@21 -- # val= 00:05:37.922 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.922 22:05:34 -- accel/accel.sh@21 -- # val=fill 00:05:37.922 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.922 22:05:34 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.922 22:05:34 -- accel/accel.sh@21 -- # val=0x80 00:05:37.922 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # read -r var val 
00:05:37.922 22:05:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:37.922 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.922 22:05:34 -- accel/accel.sh@21 -- # val= 00:05:37.922 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.922 22:05:34 -- accel/accel.sh@21 -- # val=software 00:05:37.922 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.922 22:05:34 -- accel/accel.sh@23 -- # accel_module=software 00:05:37.922 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.923 22:05:34 -- accel/accel.sh@21 -- # val=64 00:05:37.923 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.923 22:05:34 -- accel/accel.sh@21 -- # val=64 00:05:37.923 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.923 22:05:34 -- accel/accel.sh@21 -- # val=1 00:05:37.923 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.923 22:05:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:37.923 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.923 22:05:34 -- accel/accel.sh@21 -- # val=Yes 00:05:37.923 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.923 22:05:34 -- accel/accel.sh@21 -- # val= 00:05:37.923 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:37.923 22:05:34 -- accel/accel.sh@21 -- # val= 00:05:37.923 22:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # IFS=: 00:05:37.923 22:05:34 -- accel/accel.sh@20 -- # read -r var val 00:05:39.301 22:05:35 -- accel/accel.sh@21 -- # val= 00:05:39.301 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.301 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:05:39.301 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:05:39.301 22:05:35 -- accel/accel.sh@21 -- # val= 00:05:39.301 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.301 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:05:39.301 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:05:39.301 22:05:35 -- accel/accel.sh@21 -- # val= 00:05:39.301 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.301 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:05:39.301 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:05:39.301 22:05:35 -- accel/accel.sh@21 -- # val= 00:05:39.301 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.301 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:05:39.301 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:05:39.301 22:05:35 -- accel/accel.sh@21 -- # val= 00:05:39.301 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.301 22:05:35 -- accel/accel.sh@20 -- # IFS=: 
00:05:39.301 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:05:39.301 22:05:35 -- accel/accel.sh@21 -- # val= 00:05:39.301 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.301 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:05:39.301 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:05:39.301 22:05:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:39.301 22:05:35 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:39.301 22:05:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.301 00:05:39.301 real 0m3.119s 00:05:39.301 user 0m2.640s 00:05:39.301 sys 0m0.277s 00:05:39.301 22:05:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.301 22:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:39.301 ************************************ 00:05:39.301 END TEST accel_fill 00:05:39.301 ************************************ 00:05:39.301 22:05:35 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:39.301 22:05:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:39.301 22:05:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.301 22:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:39.301 ************************************ 00:05:39.301 START TEST accel_copy_crc32c 00:05:39.301 ************************************ 00:05:39.301 22:05:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:05:39.301 22:05:35 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.301 22:05:35 -- accel/accel.sh@17 -- # local accel_module 00:05:39.301 22:05:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:39.301 22:05:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:39.301 22:05:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.301 22:05:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.301 22:05:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.301 22:05:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.301 22:05:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.301 22:05:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.301 22:05:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.301 22:05:35 -- accel/accel.sh@42 -- # jq -r . 00:05:39.301 [2024-11-17 22:05:35.757411] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.301 [2024-11-17 22:05:35.757499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58688 ] 00:05:39.301 [2024-11-17 22:05:35.890445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.560 [2024-11-17 22:05:35.992099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.937 22:05:37 -- accel/accel.sh@18 -- # out=' 00:05:40.937 SPDK Configuration: 00:05:40.937 Core mask: 0x1 00:05:40.937 00:05:40.937 Accel Perf Configuration: 00:05:40.937 Workload Type: copy_crc32c 00:05:40.937 CRC-32C seed: 0 00:05:40.937 Vector size: 4096 bytes 00:05:40.937 Transfer size: 4096 bytes 00:05:40.937 Vector count 1 00:05:40.937 Module: software 00:05:40.937 Queue depth: 32 00:05:40.937 Allocate depth: 32 00:05:40.937 # threads/core: 1 00:05:40.937 Run time: 1 seconds 00:05:40.937 Verify: Yes 00:05:40.937 00:05:40.937 Running for 1 seconds... 
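The copy_crc32c case starting above is driven simply as accel_test -t 1 -w copy_crc32c -y, so everything else comes from defaults: CRC-32C seed 0, 4096-byte vectors and transfers, queue and allocation depth 32, software module, all echoed in the configuration block. Spelling those defaults out in a direct invocation (values copied from that block; treating -S as valid for copy_crc32c the same way as for crc32c is an assumption) would look roughly like:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -q 32 -a 32 -o 4096 -S 0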
00:05:40.937 00:05:40.937 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:40.937 ------------------------------------------------------------------------------------ 00:05:40.937 0,0 310304/s 1212 MiB/s 0 0 00:05:40.937 ==================================================================================== 00:05:40.937 Total 310304/s 1212 MiB/s 0 0' 00:05:40.937 22:05:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:40.937 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:40.937 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:40.937 22:05:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:40.937 22:05:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.937 22:05:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.937 22:05:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.937 22:05:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.937 22:05:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.937 22:05:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.937 22:05:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.937 22:05:37 -- accel/accel.sh@42 -- # jq -r . 00:05:40.937 [2024-11-17 22:05:37.326036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.937 [2024-11-17 22:05:37.326166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58709 ] 00:05:40.937 [2024-11-17 22:05:37.463197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.937 [2024-11-17 22:05:37.546816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val= 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val= 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val=0x1 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val= 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val= 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val=0 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 
22:05:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val= 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val=software 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@23 -- # accel_module=software 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val=32 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val=32 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val=1 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val=Yes 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val= 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:41.196 22:05:37 -- accel/accel.sh@21 -- # val= 00:05:41.196 22:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:41.196 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:42.573 22:05:38 -- accel/accel.sh@21 -- # val= 00:05:42.573 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.573 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:05:42.573 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:05:42.574 22:05:38 -- accel/accel.sh@21 -- # val= 00:05:42.574 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.574 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:05:42.574 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:05:42.574 22:05:38 -- accel/accel.sh@21 -- # val= 00:05:42.574 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.574 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:05:42.574 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:05:42.574 22:05:38 -- accel/accel.sh@21 -- # val= 00:05:42.574 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.574 22:05:38 -- accel/accel.sh@20 -- # IFS=: 
00:05:42.574 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:05:42.574 22:05:38 -- accel/accel.sh@21 -- # val= 00:05:42.574 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.574 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:05:42.574 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:05:42.574 22:05:38 -- accel/accel.sh@21 -- # val= 00:05:42.574 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.574 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:05:42.574 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:05:42.574 22:05:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:42.574 22:05:38 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:42.574 22:05:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.574 00:05:42.574 real 0m3.114s 00:05:42.574 user 0m2.642s 00:05:42.574 sys 0m0.269s 00:05:42.574 22:05:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.574 22:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:42.574 ************************************ 00:05:42.574 END TEST accel_copy_crc32c 00:05:42.574 ************************************ 00:05:42.574 22:05:38 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:42.574 22:05:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:42.574 22:05:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.574 22:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:42.574 ************************************ 00:05:42.574 START TEST accel_copy_crc32c_C2 00:05:42.574 ************************************ 00:05:42.574 22:05:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:42.574 22:05:38 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.574 22:05:38 -- accel/accel.sh@17 -- # local accel_module 00:05:42.574 22:05:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:42.574 22:05:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:42.574 22:05:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.574 22:05:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.574 22:05:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.574 22:05:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.574 22:05:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.574 22:05:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.574 22:05:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.574 22:05:38 -- accel/accel.sh@42 -- # jq -r . 00:05:42.574 [2024-11-17 22:05:38.932802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:42.574 [2024-11-17 22:05:38.932901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58742 ] 00:05:42.574 [2024-11-17 22:05:39.066640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.574 [2024-11-17 22:05:39.154865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.948 22:05:40 -- accel/accel.sh@18 -- # out=' 00:05:43.948 SPDK Configuration: 00:05:43.948 Core mask: 0x1 00:05:43.948 00:05:43.948 Accel Perf Configuration: 00:05:43.948 Workload Type: copy_crc32c 00:05:43.948 CRC-32C seed: 0 00:05:43.948 Vector size: 4096 bytes 00:05:43.948 Transfer size: 8192 bytes 00:05:43.948 Vector count 2 00:05:43.948 Module: software 00:05:43.948 Queue depth: 32 00:05:43.948 Allocate depth: 32 00:05:43.948 # threads/core: 1 00:05:43.948 Run time: 1 seconds 00:05:43.948 Verify: Yes 00:05:43.948 00:05:43.948 Running for 1 seconds... 00:05:43.948 00:05:43.948 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:43.948 ------------------------------------------------------------------------------------ 00:05:43.948 0,0 220736/s 1724 MiB/s 0 0 00:05:43.948 ==================================================================================== 00:05:43.948 Total 220736/s 862 MiB/s 0 0' 00:05:43.948 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:43.948 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:43.948 22:05:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:43.948 22:05:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:43.948 22:05:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.948 22:05:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.948 22:05:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.948 22:05:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.948 22:05:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.948 22:05:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.948 22:05:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.948 22:05:40 -- accel/accel.sh@42 -- # jq -r . 00:05:43.948 [2024-11-17 22:05:40.493382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
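The two-vector copy_crc32c results just above show the same operation rate on both rows but different bandwidth figures; the per-core row appears to count both 4 KiB vectors of each 8192-byte transfer while the Total row counts one (that interpretation is an assumption, the numbers themselves are from the log):

    echo $(( 220736 * 8192 / 1024 / 1024 ))    # 1724 (MiB/s), the per-core row
    echo $(( 220736 * 4096 / 1024 / 1024 ))    # 862 (MiB/s), the Total row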
00:05:43.948 [2024-11-17 22:05:40.493691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58763 ] 00:05:44.207 [2024-11-17 22:05:40.622249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.207 [2024-11-17 22:05:40.701140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val= 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val= 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val=0x1 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val= 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val= 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val=0 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val= 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val=software 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@23 -- # accel_module=software 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val=32 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val=32 
00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val=1 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val=Yes 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val= 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:44.207 22:05:40 -- accel/accel.sh@21 -- # val= 00:05:44.207 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:44.207 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:45.583 22:05:41 -- accel/accel.sh@21 -- # val= 00:05:45.583 22:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.583 22:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:45.583 22:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:45.583 22:05:41 -- accel/accel.sh@21 -- # val= 00:05:45.583 22:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.583 22:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:45.583 22:05:42 -- accel/accel.sh@20 -- # read -r var val 00:05:45.583 22:05:42 -- accel/accel.sh@21 -- # val= 00:05:45.583 22:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.583 22:05:42 -- accel/accel.sh@20 -- # IFS=: 00:05:45.583 22:05:42 -- accel/accel.sh@20 -- # read -r var val 00:05:45.583 22:05:42 -- accel/accel.sh@21 -- # val= 00:05:45.583 22:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.583 22:05:42 -- accel/accel.sh@20 -- # IFS=: 00:05:45.583 22:05:42 -- accel/accel.sh@20 -- # read -r var val 00:05:45.583 22:05:42 -- accel/accel.sh@21 -- # val= 00:05:45.583 22:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.583 22:05:42 -- accel/accel.sh@20 -- # IFS=: 00:05:45.583 22:05:42 -- accel/accel.sh@20 -- # read -r var val 00:05:45.583 22:05:42 -- accel/accel.sh@21 -- # val= 00:05:45.583 22:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.583 22:05:42 -- accel/accel.sh@20 -- # IFS=: 00:05:45.583 22:05:42 -- accel/accel.sh@20 -- # read -r var val 00:05:45.583 22:05:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:45.583 22:05:42 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:45.583 22:05:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.583 00:05:45.583 real 0m3.099s 00:05:45.583 user 0m2.625s 00:05:45.583 sys 0m0.271s 00:05:45.583 22:05:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.583 ************************************ 00:05:45.583 END TEST accel_copy_crc32c_C2 00:05:45.583 ************************************ 00:05:45.583 22:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:45.583 22:05:42 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:45.583 22:05:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
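Each case in this log is launched through the harness as "run_test NAME accel_test -t 1 -w WORKLOAD ...", as in the run_test accel_dualcast line traced just above, and accel_test in turn drives the accel_perf example binary with the same -t/-w flags. A rough way to rerun one case by hand is sketched below; it is only a sketch, it uses only flags that appear in this log, and it assumes accel_perf can be started without the -c /dev/fd/62 JSON config the harness passes (the traced runs build an empty accel_json_cfg anyway).
# Sketch: rerun the dualcast case outside the autotest harness (assumptions noted above)
SPDK_EXAMPLES=/home/vagrant/spdk_repo/spdk/build/examples
"$SPDK_EXAMPLES/accel_perf" -t 1 -w dualcast -y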
00:05:45.583 22:05:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.583 22:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:45.583 ************************************ 00:05:45.583 START TEST accel_dualcast 00:05:45.583 ************************************ 00:05:45.583 22:05:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:05:45.583 22:05:42 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.583 22:05:42 -- accel/accel.sh@17 -- # local accel_module 00:05:45.583 22:05:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:45.583 22:05:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:45.583 22:05:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.583 22:05:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.583 22:05:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.583 22:05:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.583 22:05:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.583 22:05:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.583 22:05:42 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.583 22:05:42 -- accel/accel.sh@42 -- # jq -r . 00:05:45.583 [2024-11-17 22:05:42.072757] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.583 [2024-11-17 22:05:42.072887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58796 ] 00:05:45.841 [2024-11-17 22:05:42.207751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.842 [2024-11-17 22:05:42.285661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.216 22:05:43 -- accel/accel.sh@18 -- # out=' 00:05:47.216 SPDK Configuration: 00:05:47.216 Core mask: 0x1 00:05:47.216 00:05:47.216 Accel Perf Configuration: 00:05:47.216 Workload Type: dualcast 00:05:47.216 Transfer size: 4096 bytes 00:05:47.216 Vector count 1 00:05:47.216 Module: software 00:05:47.216 Queue depth: 32 00:05:47.216 Allocate depth: 32 00:05:47.216 # threads/core: 1 00:05:47.216 Run time: 1 seconds 00:05:47.216 Verify: Yes 00:05:47.216 00:05:47.216 Running for 1 seconds... 00:05:47.216 00:05:47.216 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:47.216 ------------------------------------------------------------------------------------ 00:05:47.216 0,0 426048/s 1664 MiB/s 0 0 00:05:47.216 ==================================================================================== 00:05:47.216 Total 426048/s 1664 MiB/s 0 0' 00:05:47.216 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.216 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.216 22:05:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:47.216 22:05:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:47.216 22:05:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.217 22:05:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.217 22:05:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.217 22:05:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.217 22:05:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.217 22:05:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.217 22:05:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.217 22:05:43 -- accel/accel.sh@42 -- # jq -r . 
00:05:47.217 [2024-11-17 22:05:43.620418] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.217 [2024-11-17 22:05:43.621062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58817 ] 00:05:47.217 [2024-11-17 22:05:43.751945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.475 [2024-11-17 22:05:43.829851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val= 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val= 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val=0x1 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val= 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val= 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val=dualcast 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val= 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val=software 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@23 -- # accel_module=software 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val=32 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val=32 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val=1 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 
22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:47.475 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.475 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.475 22:05:43 -- accel/accel.sh@21 -- # val=Yes 00:05:47.476 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.476 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.476 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.476 22:05:43 -- accel/accel.sh@21 -- # val= 00:05:47.476 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.476 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.476 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:47.476 22:05:43 -- accel/accel.sh@21 -- # val= 00:05:47.476 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.476 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:05:47.476 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:05:48.874 22:05:45 -- accel/accel.sh@21 -- # val= 00:05:48.874 22:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.874 22:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.874 22:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.874 22:05:45 -- accel/accel.sh@21 -- # val= 00:05:48.874 22:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.874 22:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.874 22:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.874 22:05:45 -- accel/accel.sh@21 -- # val= 00:05:48.874 22:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.874 22:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.874 22:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.874 22:05:45 -- accel/accel.sh@21 -- # val= 00:05:48.874 22:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.874 22:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.874 22:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.874 22:05:45 -- accel/accel.sh@21 -- # val= 00:05:48.874 22:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.874 22:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.874 22:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.874 22:05:45 -- accel/accel.sh@21 -- # val= 00:05:48.874 22:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.874 22:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:48.874 22:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:48.874 22:05:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:48.874 22:05:45 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:48.874 22:05:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.874 00:05:48.874 real 0m3.096s 00:05:48.874 user 0m2.640s 00:05:48.874 sys 0m0.248s 00:05:48.874 22:05:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.874 ************************************ 00:05:48.874 END TEST accel_dualcast 00:05:48.874 ************************************ 00:05:48.874 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:48.874 22:05:45 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:48.874 22:05:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:48.874 22:05:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.874 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:48.874 ************************************ 00:05:48.874 START TEST accel_compare 00:05:48.874 ************************************ 00:05:48.874 22:05:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:05:48.874 
22:05:45 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.874 22:05:45 -- accel/accel.sh@17 -- # local accel_module 00:05:48.874 22:05:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:48.874 22:05:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:48.874 22:05:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.874 22:05:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.874 22:05:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.874 22:05:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.874 22:05:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.874 22:05:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.874 22:05:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.874 22:05:45 -- accel/accel.sh@42 -- # jq -r . 00:05:48.874 [2024-11-17 22:05:45.228275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.874 [2024-11-17 22:05:45.228383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58850 ] 00:05:48.874 [2024-11-17 22:05:45.367541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.133 [2024-11-17 22:05:45.497531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.511 22:05:46 -- accel/accel.sh@18 -- # out=' 00:05:50.511 SPDK Configuration: 00:05:50.511 Core mask: 0x1 00:05:50.511 00:05:50.511 Accel Perf Configuration: 00:05:50.511 Workload Type: compare 00:05:50.511 Transfer size: 4096 bytes 00:05:50.511 Vector count 1 00:05:50.511 Module: software 00:05:50.511 Queue depth: 32 00:05:50.511 Allocate depth: 32 00:05:50.511 # threads/core: 1 00:05:50.511 Run time: 1 seconds 00:05:50.511 Verify: Yes 00:05:50.511 00:05:50.511 Running for 1 seconds... 00:05:50.511 00:05:50.511 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:50.511 ------------------------------------------------------------------------------------ 00:05:50.511 0,0 542688/s 2119 MiB/s 0 0 00:05:50.511 ==================================================================================== 00:05:50.511 Total 542688/s 2119 MiB/s 0 0' 00:05:50.511 22:05:46 -- accel/accel.sh@20 -- # IFS=: 00:05:50.511 22:05:46 -- accel/accel.sh@20 -- # read -r var val 00:05:50.511 22:05:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:50.511 22:05:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.511 22:05:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:50.511 22:05:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.511 22:05:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.511 22:05:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.511 22:05:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.511 22:05:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.511 22:05:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.511 22:05:46 -- accel/accel.sh@42 -- # jq -r . 00:05:50.511 [2024-11-17 22:05:46.849193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
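The -c /dev/fd/62 argument in these invocations, together with the accel_json_cfg=() array and the jq -r . step traced above, suggests the wrapper hands accel_perf a JSON accel configuration over file descriptor 62. One way such a hookup can be expressed in bash is sketched below; this illustrates the fd-redirection pattern only and is not the literal accel.sh implementation, and the empty config string is hypothetical.
# Illustrative only: feed a JSON accel config to accel_perf over fd 62
accel_json_cfg='{}'   # hypothetical empty config; the traced runs built this up in an array
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
    -t 1 -w compare -y 62< <(printf '%s\n' "$accel_json_cfg")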
00:05:50.511 [2024-11-17 22:05:46.849292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58871 ] 00:05:50.511 [2024-11-17 22:05:46.977918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.511 [2024-11-17 22:05:47.088507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.770 22:05:47 -- accel/accel.sh@21 -- # val= 00:05:50.770 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.770 22:05:47 -- accel/accel.sh@21 -- # val= 00:05:50.770 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.770 22:05:47 -- accel/accel.sh@21 -- # val=0x1 00:05:50.770 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.770 22:05:47 -- accel/accel.sh@21 -- # val= 00:05:50.770 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.770 22:05:47 -- accel/accel.sh@21 -- # val= 00:05:50.770 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.770 22:05:47 -- accel/accel.sh@21 -- # val=compare 00:05:50.770 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.770 22:05:47 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.770 22:05:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:50.770 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.770 22:05:47 -- accel/accel.sh@21 -- # val= 00:05:50.770 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.770 22:05:47 -- accel/accel.sh@21 -- # val=software 00:05:50.770 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.770 22:05:47 -- accel/accel.sh@23 -- # accel_module=software 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.770 22:05:47 -- accel/accel.sh@21 -- # val=32 00:05:50.770 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.770 22:05:47 -- accel/accel.sh@21 -- # val=32 00:05:50.770 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.770 22:05:47 -- accel/accel.sh@21 -- # val=1 00:05:50.770 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.770 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.771 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.771 22:05:47 -- accel/accel.sh@21 -- # val='1 seconds' 
00:05:50.771 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.771 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.771 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.771 22:05:47 -- accel/accel.sh@21 -- # val=Yes 00:05:50.771 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.771 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.771 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.771 22:05:47 -- accel/accel.sh@21 -- # val= 00:05:50.771 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.771 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.771 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:50.771 22:05:47 -- accel/accel.sh@21 -- # val= 00:05:50.771 22:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.771 22:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:50.771 22:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:52.148 22:05:48 -- accel/accel.sh@21 -- # val= 00:05:52.148 22:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.148 22:05:48 -- accel/accel.sh@20 -- # IFS=: 00:05:52.148 22:05:48 -- accel/accel.sh@20 -- # read -r var val 00:05:52.148 22:05:48 -- accel/accel.sh@21 -- # val= 00:05:52.148 22:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.148 22:05:48 -- accel/accel.sh@20 -- # IFS=: 00:05:52.148 22:05:48 -- accel/accel.sh@20 -- # read -r var val 00:05:52.148 22:05:48 -- accel/accel.sh@21 -- # val= 00:05:52.148 22:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.148 22:05:48 -- accel/accel.sh@20 -- # IFS=: 00:05:52.148 22:05:48 -- accel/accel.sh@20 -- # read -r var val 00:05:52.148 22:05:48 -- accel/accel.sh@21 -- # val= 00:05:52.148 22:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.148 22:05:48 -- accel/accel.sh@20 -- # IFS=: 00:05:52.148 22:05:48 -- accel/accel.sh@20 -- # read -r var val 00:05:52.148 22:05:48 -- accel/accel.sh@21 -- # val= 00:05:52.148 22:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.148 22:05:48 -- accel/accel.sh@20 -- # IFS=: 00:05:52.148 22:05:48 -- accel/accel.sh@20 -- # read -r var val 00:05:52.148 22:05:48 -- accel/accel.sh@21 -- # val= 00:05:52.148 22:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.148 22:05:48 -- accel/accel.sh@20 -- # IFS=: 00:05:52.148 22:05:48 -- accel/accel.sh@20 -- # read -r var val 00:05:52.148 22:05:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.148 22:05:48 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:52.148 22:05:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.148 00:05:52.148 real 0m3.218s 00:05:52.148 user 0m2.720s 00:05:52.148 sys 0m0.291s 00:05:52.149 22:05:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.149 22:05:48 -- common/autotest_common.sh@10 -- # set +x 00:05:52.149 ************************************ 00:05:52.149 END TEST accel_compare 00:05:52.149 ************************************ 00:05:52.149 22:05:48 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:52.149 22:05:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:52.149 22:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.149 22:05:48 -- common/autotest_common.sh@10 -- # set +x 00:05:52.149 ************************************ 00:05:52.149 START TEST accel_xor 00:05:52.149 ************************************ 00:05:52.149 22:05:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:05:52.149 22:05:48 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.149 22:05:48 -- accel/accel.sh@17 -- # local accel_module 00:05:52.149 
22:05:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:52.149 22:05:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:52.149 22:05:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.149 22:05:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.149 22:05:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.149 22:05:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.149 22:05:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.149 22:05:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.149 22:05:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.149 22:05:48 -- accel/accel.sh@42 -- # jq -r . 00:05:52.149 [2024-11-17 22:05:48.494247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.149 [2024-11-17 22:05:48.494357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58905 ] 00:05:52.149 [2024-11-17 22:05:48.634421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.149 [2024-11-17 22:05:48.733896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.527 22:05:50 -- accel/accel.sh@18 -- # out=' 00:05:53.527 SPDK Configuration: 00:05:53.527 Core mask: 0x1 00:05:53.527 00:05:53.527 Accel Perf Configuration: 00:05:53.527 Workload Type: xor 00:05:53.527 Source buffers: 2 00:05:53.527 Transfer size: 4096 bytes 00:05:53.527 Vector count 1 00:05:53.527 Module: software 00:05:53.527 Queue depth: 32 00:05:53.527 Allocate depth: 32 00:05:53.527 # threads/core: 1 00:05:53.527 Run time: 1 seconds 00:05:53.527 Verify: Yes 00:05:53.527 00:05:53.527 Running for 1 seconds... 00:05:53.527 00:05:53.527 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:53.527 ------------------------------------------------------------------------------------ 00:05:53.527 0,0 260832/s 1018 MiB/s 0 0 00:05:53.527 ==================================================================================== 00:05:53.527 Total 260832/s 1018 MiB/s 0 0' 00:05:53.527 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.527 22:05:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:53.527 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.527 22:05:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:53.527 22:05:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.527 22:05:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.527 22:05:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.527 22:05:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.527 22:05:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.527 22:05:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.527 22:05:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.527 22:05:50 -- accel/accel.sh@42 -- # jq -r . 00:05:53.527 [2024-11-17 22:05:50.084510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:53.527 [2024-11-17 22:05:50.084624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58925 ] 00:05:53.787 [2024-11-17 22:05:50.220077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.787 [2024-11-17 22:05:50.311516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val= 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val= 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val=0x1 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val= 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val= 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val=xor 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val=2 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val= 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val=software 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@23 -- # accel_module=software 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val=32 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val=32 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val=1 00:05:53.787 22:05:50 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val=Yes 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val= 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:53.787 22:05:50 -- accel/accel.sh@21 -- # val= 00:05:53.787 22:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:53.787 22:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:55.164 22:05:51 -- accel/accel.sh@21 -- # val= 00:05:55.164 22:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.164 22:05:51 -- accel/accel.sh@20 -- # IFS=: 00:05:55.164 22:05:51 -- accel/accel.sh@20 -- # read -r var val 00:05:55.164 22:05:51 -- accel/accel.sh@21 -- # val= 00:05:55.164 22:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.164 22:05:51 -- accel/accel.sh@20 -- # IFS=: 00:05:55.164 22:05:51 -- accel/accel.sh@20 -- # read -r var val 00:05:55.164 22:05:51 -- accel/accel.sh@21 -- # val= 00:05:55.164 22:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.164 22:05:51 -- accel/accel.sh@20 -- # IFS=: 00:05:55.164 22:05:51 -- accel/accel.sh@20 -- # read -r var val 00:05:55.164 22:05:51 -- accel/accel.sh@21 -- # val= 00:05:55.164 22:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.164 22:05:51 -- accel/accel.sh@20 -- # IFS=: 00:05:55.164 22:05:51 -- accel/accel.sh@20 -- # read -r var val 00:05:55.164 22:05:51 -- accel/accel.sh@21 -- # val= 00:05:55.164 22:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.164 22:05:51 -- accel/accel.sh@20 -- # IFS=: 00:05:55.164 22:05:51 -- accel/accel.sh@20 -- # read -r var val 00:05:55.164 22:05:51 -- accel/accel.sh@21 -- # val= 00:05:55.164 22:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.164 22:05:51 -- accel/accel.sh@20 -- # IFS=: 00:05:55.164 22:05:51 -- accel/accel.sh@20 -- # read -r var val 00:05:55.164 22:05:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:55.164 22:05:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:55.164 22:05:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.164 00:05:55.164 real 0m3.160s 00:05:55.164 user 0m2.673s 00:05:55.164 sys 0m0.286s 00:05:55.164 22:05:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.164 22:05:51 -- common/autotest_common.sh@10 -- # set +x 00:05:55.164 ************************************ 00:05:55.164 END TEST accel_xor 00:05:55.164 ************************************ 00:05:55.164 22:05:51 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:55.164 22:05:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:55.164 22:05:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.164 22:05:51 -- common/autotest_common.sh@10 -- # set +x 00:05:55.164 ************************************ 00:05:55.164 START TEST accel_xor 00:05:55.164 ************************************ 00:05:55.164 
22:05:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:05:55.164 22:05:51 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.164 22:05:51 -- accel/accel.sh@17 -- # local accel_module 00:05:55.164 22:05:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:55.164 22:05:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:55.164 22:05:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.164 22:05:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.164 22:05:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.164 22:05:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.164 22:05:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.164 22:05:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.164 22:05:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.164 22:05:51 -- accel/accel.sh@42 -- # jq -r . 00:05:55.164 [2024-11-17 22:05:51.705471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.164 [2024-11-17 22:05:51.705580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58965 ] 00:05:55.423 [2024-11-17 22:05:51.840869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.423 [2024-11-17 22:05:51.929762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.799 22:05:53 -- accel/accel.sh@18 -- # out=' 00:05:56.799 SPDK Configuration: 00:05:56.799 Core mask: 0x1 00:05:56.799 00:05:56.799 Accel Perf Configuration: 00:05:56.799 Workload Type: xor 00:05:56.799 Source buffers: 3 00:05:56.799 Transfer size: 4096 bytes 00:05:56.799 Vector count 1 00:05:56.799 Module: software 00:05:56.799 Queue depth: 32 00:05:56.799 Allocate depth: 32 00:05:56.799 # threads/core: 1 00:05:56.799 Run time: 1 seconds 00:05:56.799 Verify: Yes 00:05:56.799 00:05:56.799 Running for 1 seconds... 00:05:56.799 00:05:56.799 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:56.799 ------------------------------------------------------------------------------------ 00:05:56.799 0,0 251872/s 983 MiB/s 0 0 00:05:56.799 ==================================================================================== 00:05:56.799 Total 251872/s 983 MiB/s 0 0' 00:05:56.799 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:56.799 22:05:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:56.799 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:56.799 22:05:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:56.799 22:05:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.799 22:05:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.799 22:05:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.799 22:05:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.799 22:05:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.799 22:05:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.799 22:05:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.799 22:05:53 -- accel/accel.sh@42 -- # jq -r . 00:05:56.799 [2024-11-17 22:05:53.269421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
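Comparing this run with the preceding xor case shows what the -x option does: plain "-w xor -y" reported "Source buffers: 2" at 260832 transfers/s, while "-w xor -y -x 3" reports "Source buffers: 3" at the lower 251872 transfers/s above. A short sketch for sweeping the source-buffer count follows; it reuses only options seen in this log and, as before, assumes the -c JSON config can be omitted when run by hand.
# Sketch: sweep xor source-buffer counts (-x taken from this log)
for n in 2 3; do
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x "$n"
done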
00:05:56.799 [2024-11-17 22:05:53.269785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58979 ] 00:05:56.799 [2024-11-17 22:05:53.406139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.059 [2024-11-17 22:05:53.495211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val= 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val= 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val=0x1 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val= 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val= 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val=xor 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val=3 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val= 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val=software 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@23 -- # accel_module=software 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val=32 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val=32 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val=1 00:05:57.059 22:05:53 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val=Yes 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val= 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:57.059 22:05:53 -- accel/accel.sh@21 -- # val= 00:05:57.059 22:05:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:57.059 22:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.436 22:05:54 -- accel/accel.sh@21 -- # val= 00:05:58.436 22:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.436 22:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:58.436 22:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:58.436 22:05:54 -- accel/accel.sh@21 -- # val= 00:05:58.436 22:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.436 22:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:58.436 22:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:58.436 22:05:54 -- accel/accel.sh@21 -- # val= 00:05:58.436 22:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.436 22:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:58.436 22:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:58.436 22:05:54 -- accel/accel.sh@21 -- # val= 00:05:58.436 22:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.436 22:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:58.436 22:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:58.436 22:05:54 -- accel/accel.sh@21 -- # val= 00:05:58.436 22:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.436 22:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:58.436 22:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:58.436 22:05:54 -- accel/accel.sh@21 -- # val= 00:05:58.436 22:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.436 22:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:58.436 22:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:58.436 22:05:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:58.436 22:05:54 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:58.436 22:05:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.436 ************************************ 00:05:58.436 END TEST accel_xor 00:05:58.436 ************************************ 00:05:58.436 00:05:58.436 real 0m3.137s 00:05:58.436 user 0m2.654s 00:05:58.436 sys 0m0.280s 00:05:58.436 22:05:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.436 22:05:54 -- common/autotest_common.sh@10 -- # set +x 00:05:58.436 22:05:54 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:58.436 22:05:54 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:58.436 22:05:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.436 22:05:54 -- common/autotest_common.sh@10 -- # set +x 00:05:58.436 ************************************ 00:05:58.436 START TEST accel_dif_verify 00:05:58.436 ************************************ 
00:05:58.436 22:05:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:05:58.436 22:05:54 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.436 22:05:54 -- accel/accel.sh@17 -- # local accel_module 00:05:58.436 22:05:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:58.436 22:05:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:58.436 22:05:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.436 22:05:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.436 22:05:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.436 22:05:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.436 22:05:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.436 22:05:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.436 22:05:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.436 22:05:54 -- accel/accel.sh@42 -- # jq -r . 00:05:58.436 [2024-11-17 22:05:54.889841] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.436 [2024-11-17 22:05:54.890099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59019 ] 00:05:58.436 [2024-11-17 22:05:55.026995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.695 [2024-11-17 22:05:55.111547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.072 22:05:56 -- accel/accel.sh@18 -- # out=' 00:06:00.072 SPDK Configuration: 00:06:00.072 Core mask: 0x1 00:06:00.072 00:06:00.072 Accel Perf Configuration: 00:06:00.072 Workload Type: dif_verify 00:06:00.072 Vector size: 4096 bytes 00:06:00.072 Transfer size: 4096 bytes 00:06:00.072 Block size: 512 bytes 00:06:00.072 Metadata size: 8 bytes 00:06:00.072 Vector count 1 00:06:00.072 Module: software 00:06:00.072 Queue depth: 32 00:06:00.072 Allocate depth: 32 00:06:00.072 # threads/core: 1 00:06:00.072 Run time: 1 seconds 00:06:00.072 Verify: No 00:06:00.072 00:06:00.072 Running for 1 seconds... 00:06:00.072 00:06:00.072 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:00.072 ------------------------------------------------------------------------------------ 00:06:00.072 0,0 125376/s 497 MiB/s 0 0 00:06:00.072 ==================================================================================== 00:06:00.072 Total 125376/s 489 MiB/s 0 0' 00:06:00.072 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.072 22:05:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:00.072 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.072 22:05:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.072 22:05:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:00.072 22:05:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.072 22:05:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.072 22:05:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.072 22:05:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.072 22:05:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.072 22:05:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.072 22:05:56 -- accel/accel.sh@42 -- # jq -r . 00:06:00.072 [2024-11-17 22:05:56.448967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
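The dif_verify configuration above differs from the earlier copy-style workloads: it adds "Block size: 512 bytes" and "Metadata size: 8 bytes", and it reports "Verify: No", matching the absence of the -y flag that the earlier cases passed. A rough sketch for exercising both DIF workloads seen in this log is below, again under the assumption that accel_perf can be launched without the harness's -c JSON config.
# Sketch: run the two DIF workloads exercised in this log
for w in dif_verify dif_generate; do
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$w"
done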
00:06:00.072 [2024-11-17 22:05:56.449252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59033 ] 00:06:00.072 [2024-11-17 22:05:56.577925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.072 [2024-11-17 22:05:56.661679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val= 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val= 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val=0x1 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val= 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val= 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val=dif_verify 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val= 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val=software 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 
-- # val=32 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val=32 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val=1 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val=No 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val= 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:00.331 22:05:56 -- accel/accel.sh@21 -- # val= 00:06:00.331 22:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # IFS=: 00:06:00.331 22:05:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.706 22:05:57 -- accel/accel.sh@21 -- # val= 00:06:01.706 22:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.706 22:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:01.706 22:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:01.706 22:05:57 -- accel/accel.sh@21 -- # val= 00:06:01.706 22:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.706 22:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:01.706 22:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:01.706 22:05:57 -- accel/accel.sh@21 -- # val= 00:06:01.706 22:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.706 22:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:01.706 22:05:57 -- accel/accel.sh@20 -- # read -r var val 00:06:01.706 22:05:57 -- accel/accel.sh@21 -- # val= 00:06:01.706 22:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.706 22:05:57 -- accel/accel.sh@20 -- # IFS=: 00:06:01.706 22:05:58 -- accel/accel.sh@20 -- # read -r var val 00:06:01.706 22:05:58 -- accel/accel.sh@21 -- # val= 00:06:01.706 22:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.706 22:05:58 -- accel/accel.sh@20 -- # IFS=: 00:06:01.706 22:05:58 -- accel/accel.sh@20 -- # read -r var val 00:06:01.706 22:05:58 -- accel/accel.sh@21 -- # val= 00:06:01.706 22:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.706 22:05:58 -- accel/accel.sh@20 -- # IFS=: 00:06:01.706 22:05:58 -- accel/accel.sh@20 -- # read -r var val 00:06:01.706 22:05:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:01.706 22:05:58 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:01.706 22:05:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.706 00:06:01.706 real 0m3.137s 00:06:01.706 user 0m2.659s 00:06:01.706 sys 0m0.276s 00:06:01.706 22:05:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.706 22:05:58 -- common/autotest_common.sh@10 -- # set +x 00:06:01.706 ************************************ 00:06:01.706 END TEST 
accel_dif_verify 00:06:01.706 ************************************ 00:06:01.706 22:05:58 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:01.706 22:05:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:01.706 22:05:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.706 22:05:58 -- common/autotest_common.sh@10 -- # set +x 00:06:01.706 ************************************ 00:06:01.706 START TEST accel_dif_generate 00:06:01.706 ************************************ 00:06:01.706 22:05:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:01.706 22:05:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.706 22:05:58 -- accel/accel.sh@17 -- # local accel_module 00:06:01.706 22:05:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:01.706 22:05:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:01.706 22:05:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.706 22:05:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.706 22:05:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.706 22:05:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.706 22:05:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.706 22:05:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.706 22:05:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.706 22:05:58 -- accel/accel.sh@42 -- # jq -r . 00:06:01.706 [2024-11-17 22:05:58.085325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.706 [2024-11-17 22:05:58.085568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59073 ] 00:06:01.706 [2024-11-17 22:05:58.222995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.965 [2024-11-17 22:05:58.342519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.343 22:05:59 -- accel/accel.sh@18 -- # out=' 00:06:03.343 SPDK Configuration: 00:06:03.343 Core mask: 0x1 00:06:03.343 00:06:03.343 Accel Perf Configuration: 00:06:03.343 Workload Type: dif_generate 00:06:03.343 Vector size: 4096 bytes 00:06:03.343 Transfer size: 4096 bytes 00:06:03.343 Block size: 512 bytes 00:06:03.343 Metadata size: 8 bytes 00:06:03.343 Vector count 1 00:06:03.343 Module: software 00:06:03.343 Queue depth: 32 00:06:03.343 Allocate depth: 32 00:06:03.343 # threads/core: 1 00:06:03.343 Run time: 1 seconds 00:06:03.343 Verify: No 00:06:03.343 00:06:03.343 Running for 1 seconds... 
00:06:03.343 00:06:03.343 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:03.343 ------------------------------------------------------------------------------------ 00:06:03.343 0,0 149216/s 591 MiB/s 0 0 00:06:03.343 ==================================================================================== 00:06:03.343 Total 149216/s 582 MiB/s 0 0' 00:06:03.343 22:05:59 -- accel/accel.sh@20 -- # IFS=: 00:06:03.343 22:05:59 -- accel/accel.sh@20 -- # read -r var val 00:06:03.343 22:05:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:03.343 22:05:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:03.343 22:05:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.343 22:05:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.343 22:05:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.343 22:05:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.343 22:05:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.343 22:05:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.343 22:05:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.343 22:05:59 -- accel/accel.sh@42 -- # jq -r . 00:06:03.343 [2024-11-17 22:05:59.695266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.343 [2024-11-17 22:05:59.695362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59087 ] 00:06:03.343 [2024-11-17 22:05:59.825477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.343 [2024-11-17 22:05:59.956058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.602 22:06:00 -- accel/accel.sh@21 -- # val= 00:06:03.602 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.602 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.602 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.602 22:06:00 -- accel/accel.sh@21 -- # val= 00:06:03.602 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.602 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.602 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.602 22:06:00 -- accel/accel.sh@21 -- # val=0x1 00:06:03.602 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.602 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.602 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.602 22:06:00 -- accel/accel.sh@21 -- # val= 00:06:03.602 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.602 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.602 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.602 22:06:00 -- accel/accel.sh@21 -- # val= 00:06:03.602 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.602 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.602 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.602 22:06:00 -- accel/accel.sh@21 -- # val=dif_generate 00:06:03.602 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 
00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val= 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val=software 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@23 -- # accel_module=software 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val=32 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val=32 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val=1 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val=No 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val= 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:03.603 22:06:00 -- accel/accel.sh@21 -- # val= 00:06:03.603 22:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # IFS=: 00:06:03.603 22:06:00 -- accel/accel.sh@20 -- # read -r var val 00:06:04.981 22:06:01 -- accel/accel.sh@21 -- # val= 00:06:04.981 22:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.981 22:06:01 -- accel/accel.sh@20 -- # IFS=: 00:06:04.981 22:06:01 -- accel/accel.sh@20 -- # read -r var val 00:06:04.981 22:06:01 -- accel/accel.sh@21 -- # val= 00:06:04.981 22:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.981 22:06:01 -- accel/accel.sh@20 -- # IFS=: 00:06:04.981 22:06:01 -- accel/accel.sh@20 -- # read -r var val 00:06:04.981 22:06:01 -- accel/accel.sh@21 -- # val= 00:06:04.981 22:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.981 22:06:01 -- 
accel/accel.sh@20 -- # IFS=: 00:06:04.981 22:06:01 -- accel/accel.sh@20 -- # read -r var val 00:06:04.981 22:06:01 -- accel/accel.sh@21 -- # val= 00:06:04.981 22:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.981 22:06:01 -- accel/accel.sh@20 -- # IFS=: 00:06:04.981 22:06:01 -- accel/accel.sh@20 -- # read -r var val 00:06:04.981 22:06:01 -- accel/accel.sh@21 -- # val= 00:06:04.981 22:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.981 22:06:01 -- accel/accel.sh@20 -- # IFS=: 00:06:04.981 22:06:01 -- accel/accel.sh@20 -- # read -r var val 00:06:04.981 22:06:01 -- accel/accel.sh@21 -- # val= 00:06:04.981 22:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.981 22:06:01 -- accel/accel.sh@20 -- # IFS=: 00:06:04.981 22:06:01 -- accel/accel.sh@20 -- # read -r var val 00:06:04.981 22:06:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:04.981 22:06:01 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:04.981 ************************************ 00:06:04.981 END TEST accel_dif_generate 00:06:04.981 ************************************ 00:06:04.981 22:06:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.981 00:06:04.981 real 0m3.212s 00:06:04.981 user 0m2.739s 00:06:04.981 sys 0m0.271s 00:06:04.981 22:06:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.981 22:06:01 -- common/autotest_common.sh@10 -- # set +x 00:06:04.981 22:06:01 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:04.981 22:06:01 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:04.981 22:06:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.981 22:06:01 -- common/autotest_common.sh@10 -- # set +x 00:06:04.981 ************************************ 00:06:04.981 START TEST accel_dif_generate_copy 00:06:04.981 ************************************ 00:06:04.981 22:06:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:04.981 22:06:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.981 22:06:01 -- accel/accel.sh@17 -- # local accel_module 00:06:04.981 22:06:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:04.981 22:06:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:04.981 22:06:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.981 22:06:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.981 22:06:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.981 22:06:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.981 22:06:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.981 22:06:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.981 22:06:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.981 22:06:01 -- accel/accel.sh@42 -- # jq -r . 00:06:04.981 [2024-11-17 22:06:01.353975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:04.981 [2024-11-17 22:06:01.354088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59127 ] 00:06:04.981 [2024-11-17 22:06:01.491114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.981 [2024-11-17 22:06:01.577838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.358 22:06:02 -- accel/accel.sh@18 -- # out=' 00:06:06.358 SPDK Configuration: 00:06:06.358 Core mask: 0x1 00:06:06.358 00:06:06.358 Accel Perf Configuration: 00:06:06.358 Workload Type: dif_generate_copy 00:06:06.358 Vector size: 4096 bytes 00:06:06.358 Transfer size: 4096 bytes 00:06:06.358 Vector count 1 00:06:06.358 Module: software 00:06:06.358 Queue depth: 32 00:06:06.358 Allocate depth: 32 00:06:06.358 # threads/core: 1 00:06:06.358 Run time: 1 seconds 00:06:06.358 Verify: No 00:06:06.358 00:06:06.358 Running for 1 seconds... 00:06:06.358 00:06:06.358 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.358 ------------------------------------------------------------------------------------ 00:06:06.358 0,0 117376/s 465 MiB/s 0 0 00:06:06.358 ==================================================================================== 00:06:06.358 Total 117376/s 458 MiB/s 0 0' 00:06:06.358 22:06:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:06.358 22:06:02 -- accel/accel.sh@20 -- # IFS=: 00:06:06.358 22:06:02 -- accel/accel.sh@20 -- # read -r var val 00:06:06.358 22:06:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:06.358 22:06:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.358 22:06:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.358 22:06:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.358 22:06:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.358 22:06:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.358 22:06:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.358 22:06:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.358 22:06:02 -- accel/accel.sh@42 -- # jq -r . 00:06:06.358 [2024-11-17 22:06:02.914658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
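The dif_generate_copy pass above is driven the same way; stripped of timestamps, the wrapper and the underlying perf call traced in this log reduce to roughly:

    # harness wrapper registering the test case (run_test / accel_test)
    run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
    # perf call issued by the accel_perf helper in accel.sh
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w dif_generate_copy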
00:06:06.358 [2024-11-17 22:06:02.914996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59147 ] 00:06:06.617 [2024-11-17 22:06:03.051645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.617 [2024-11-17 22:06:03.138630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.617 22:06:03 -- accel/accel.sh@21 -- # val= 00:06:06.617 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.617 22:06:03 -- accel/accel.sh@21 -- # val= 00:06:06.617 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.617 22:06:03 -- accel/accel.sh@21 -- # val=0x1 00:06:06.617 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.617 22:06:03 -- accel/accel.sh@21 -- # val= 00:06:06.617 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.617 22:06:03 -- accel/accel.sh@21 -- # val= 00:06:06.617 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.617 22:06:03 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:06.617 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.617 22:06:03 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.617 22:06:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.617 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.617 22:06:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.617 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.617 22:06:03 -- accel/accel.sh@21 -- # val= 00:06:06.617 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.617 22:06:03 -- accel/accel.sh@21 -- # val=software 00:06:06.617 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.617 22:06:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.617 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.617 22:06:03 -- accel/accel.sh@21 -- # val=32 00:06:06.618 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.618 22:06:03 -- accel/accel.sh@21 -- # val=32 00:06:06.618 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.618 22:06:03 -- accel/accel.sh@21 
-- # val=1 00:06:06.618 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.618 22:06:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.618 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.618 22:06:03 -- accel/accel.sh@21 -- # val=No 00:06:06.618 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.618 22:06:03 -- accel/accel.sh@21 -- # val= 00:06:06.618 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:06.618 22:06:03 -- accel/accel.sh@21 -- # val= 00:06:06.618 22:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # IFS=: 00:06:06.618 22:06:03 -- accel/accel.sh@20 -- # read -r var val 00:06:07.995 22:06:04 -- accel/accel.sh@21 -- # val= 00:06:07.995 22:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.995 22:06:04 -- accel/accel.sh@20 -- # IFS=: 00:06:07.995 22:06:04 -- accel/accel.sh@20 -- # read -r var val 00:06:07.995 22:06:04 -- accel/accel.sh@21 -- # val= 00:06:07.995 22:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.995 22:06:04 -- accel/accel.sh@20 -- # IFS=: 00:06:07.995 22:06:04 -- accel/accel.sh@20 -- # read -r var val 00:06:07.995 22:06:04 -- accel/accel.sh@21 -- # val= 00:06:07.995 22:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.995 22:06:04 -- accel/accel.sh@20 -- # IFS=: 00:06:07.995 22:06:04 -- accel/accel.sh@20 -- # read -r var val 00:06:07.995 22:06:04 -- accel/accel.sh@21 -- # val= 00:06:07.995 22:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.995 22:06:04 -- accel/accel.sh@20 -- # IFS=: 00:06:07.995 22:06:04 -- accel/accel.sh@20 -- # read -r var val 00:06:07.995 22:06:04 -- accel/accel.sh@21 -- # val= 00:06:07.995 22:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.995 22:06:04 -- accel/accel.sh@20 -- # IFS=: 00:06:07.995 22:06:04 -- accel/accel.sh@20 -- # read -r var val 00:06:07.995 22:06:04 -- accel/accel.sh@21 -- # val= 00:06:07.995 22:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.995 22:06:04 -- accel/accel.sh@20 -- # IFS=: 00:06:07.995 22:06:04 -- accel/accel.sh@20 -- # read -r var val 00:06:07.995 22:06:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:07.995 ************************************ 00:06:07.995 END TEST accel_dif_generate_copy 00:06:07.995 ************************************ 00:06:07.995 22:06:04 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:07.995 22:06:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.995 00:06:07.995 real 0m3.124s 00:06:07.995 user 0m2.642s 00:06:07.995 sys 0m0.277s 00:06:07.995 22:06:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.995 22:06:04 -- common/autotest_common.sh@10 -- # set +x 00:06:07.995 22:06:04 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:07.995 22:06:04 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:07.995 22:06:04 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:07.995 22:06:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.995 22:06:04 -- 
common/autotest_common.sh@10 -- # set +x 00:06:07.995 ************************************ 00:06:07.995 START TEST accel_comp 00:06:07.995 ************************************ 00:06:07.995 22:06:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:07.995 22:06:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.995 22:06:04 -- accel/accel.sh@17 -- # local accel_module 00:06:07.995 22:06:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:07.996 22:06:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:07.996 22:06:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.996 22:06:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.996 22:06:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.996 22:06:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.996 22:06:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.996 22:06:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.996 22:06:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.996 22:06:04 -- accel/accel.sh@42 -- # jq -r . 00:06:07.996 [2024-11-17 22:06:04.524141] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.996 [2024-11-17 22:06:04.524424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59181 ] 00:06:08.254 [2024-11-17 22:06:04.659898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.254 [2024-11-17 22:06:04.739468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.632 22:06:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:09.632 00:06:09.632 SPDK Configuration: 00:06:09.632 Core mask: 0x1 00:06:09.632 00:06:09.632 Accel Perf Configuration: 00:06:09.632 Workload Type: compress 00:06:09.632 Transfer size: 4096 bytes 00:06:09.632 Vector count 1 00:06:09.632 Module: software 00:06:09.632 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:09.632 Queue depth: 32 00:06:09.632 Allocate depth: 32 00:06:09.632 # threads/core: 1 00:06:09.632 Run time: 1 seconds 00:06:09.632 Verify: No 00:06:09.632 00:06:09.632 Running for 1 seconds... 
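Unlike the DIF workloads, the compress run takes an input file via -l, which shows up above as "File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib"; the traced call is, in essence:

    # compress the test bib file for 1 second; no -y, hence "Verify: No" above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib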
00:06:09.632 00:06:09.632 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:09.632 ------------------------------------------------------------------------------------ 00:06:09.632 0,0 59584/s 248 MiB/s 0 0 00:06:09.632 ==================================================================================== 00:06:09.632 Total 59584/s 232 MiB/s 0 0' 00:06:09.632 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.632 22:06:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:09.632 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.632 22:06:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:09.632 22:06:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.632 22:06:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.632 22:06:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.632 22:06:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.632 22:06:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.632 22:06:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.632 22:06:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.632 22:06:06 -- accel/accel.sh@42 -- # jq -r . 00:06:09.632 [2024-11-17 22:06:06.063597] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.632 [2024-11-17 22:06:06.063679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59201 ] 00:06:09.632 [2024-11-17 22:06:06.198213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.892 [2024-11-17 22:06:06.286536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val= 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val= 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val= 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val=0x1 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val= 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val= 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val=compress 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 
00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val= 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val=software 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val=32 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val=32 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val=1 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val=No 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val= 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:09.892 22:06:06 -- accel/accel.sh@21 -- # val= 00:06:09.892 22:06:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # IFS=: 00:06:09.892 22:06:06 -- accel/accel.sh@20 -- # read -r var val 00:06:11.269 22:06:07 -- accel/accel.sh@21 -- # val= 00:06:11.269 22:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.269 22:06:07 -- accel/accel.sh@20 -- # IFS=: 00:06:11.269 22:06:07 -- accel/accel.sh@20 -- # read -r var val 00:06:11.269 22:06:07 -- accel/accel.sh@21 -- # val= 00:06:11.269 22:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.269 22:06:07 -- accel/accel.sh@20 -- # IFS=: 00:06:11.269 22:06:07 -- accel/accel.sh@20 -- # read -r var val 00:06:11.269 22:06:07 -- accel/accel.sh@21 -- # val= 00:06:11.269 22:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.269 22:06:07 -- accel/accel.sh@20 -- # IFS=: 00:06:11.269 22:06:07 -- accel/accel.sh@20 -- # read -r var val 00:06:11.269 22:06:07 -- accel/accel.sh@21 -- # val= 
00:06:11.269 22:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.269 22:06:07 -- accel/accel.sh@20 -- # IFS=: 00:06:11.269 22:06:07 -- accel/accel.sh@20 -- # read -r var val 00:06:11.269 22:06:07 -- accel/accel.sh@21 -- # val= 00:06:11.269 22:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.269 22:06:07 -- accel/accel.sh@20 -- # IFS=: 00:06:11.269 22:06:07 -- accel/accel.sh@20 -- # read -r var val 00:06:11.269 22:06:07 -- accel/accel.sh@21 -- # val= 00:06:11.269 22:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.269 22:06:07 -- accel/accel.sh@20 -- # IFS=: 00:06:11.269 22:06:07 -- accel/accel.sh@20 -- # read -r var val 00:06:11.269 22:06:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:11.269 22:06:07 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:11.269 22:06:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.269 00:06:11.269 real 0m3.100s 00:06:11.269 user 0m2.636s 00:06:11.269 sys 0m0.263s 00:06:11.269 22:06:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.269 22:06:07 -- common/autotest_common.sh@10 -- # set +x 00:06:11.269 ************************************ 00:06:11.269 END TEST accel_comp 00:06:11.269 ************************************ 00:06:11.269 22:06:07 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:11.269 22:06:07 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:11.269 22:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.269 22:06:07 -- common/autotest_common.sh@10 -- # set +x 00:06:11.269 ************************************ 00:06:11.269 START TEST accel_decomp 00:06:11.269 ************************************ 00:06:11.269 22:06:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:11.269 22:06:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.269 22:06:07 -- accel/accel.sh@17 -- # local accel_module 00:06:11.269 22:06:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:11.269 22:06:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:11.269 22:06:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.269 22:06:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.269 22:06:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.269 22:06:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.269 22:06:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.269 22:06:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.269 22:06:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.269 22:06:07 -- accel/accel.sh@42 -- # jq -r . 00:06:11.269 [2024-11-17 22:06:07.675427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.269 [2024-11-17 22:06:07.675520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59237 ] 00:06:11.269 [2024-11-17 22:06:07.813943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.529 [2024-11-17 22:06:07.905443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.926 22:06:09 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:12.926 00:06:12.926 SPDK Configuration: 00:06:12.926 Core mask: 0x1 00:06:12.926 00:06:12.926 Accel Perf Configuration: 00:06:12.926 Workload Type: decompress 00:06:12.926 Transfer size: 4096 bytes 00:06:12.926 Vector count 1 00:06:12.926 Module: software 00:06:12.926 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:12.926 Queue depth: 32 00:06:12.926 Allocate depth: 32 00:06:12.926 # threads/core: 1 00:06:12.926 Run time: 1 seconds 00:06:12.926 Verify: Yes 00:06:12.926 00:06:12.926 Running for 1 seconds... 00:06:12.926 00:06:12.926 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.926 ------------------------------------------------------------------------------------ 00:06:12.926 0,0 83840/s 154 MiB/s 0 0 00:06:12.926 ==================================================================================== 00:06:12.926 Total 83840/s 327 MiB/s 0 0' 00:06:12.926 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.926 22:06:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:12.926 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.926 22:06:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:12.926 22:06:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.926 22:06:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.926 22:06:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.926 22:06:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.926 22:06:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.926 22:06:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.926 22:06:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.926 22:06:09 -- accel/accel.sh@42 -- # jq -r . 00:06:12.926 [2024-11-17 22:06:09.256581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
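The decompress pass reuses the same input file but adds -y, which is presumably why this configuration reports "Verify: Yes" where the compress run reported "Verify: No"; the call traced above is approximately:

    # decompress the bib file and verify the output (-y), 4096-byte transfers
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y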
00:06:12.926 [2024-11-17 22:06:09.256828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59257 ] 00:06:12.926 [2024-11-17 22:06:09.388220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.926 [2024-11-17 22:06:09.482193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val= 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val= 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val= 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val=0x1 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val= 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val= 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val=decompress 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val= 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val=software 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val=32 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- 
accel/accel.sh@21 -- # val=32 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val=1 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val=Yes 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val= 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.185 22:06:09 -- accel/accel.sh@21 -- # val= 00:06:13.185 22:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:13.185 22:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.564 22:06:10 -- accel/accel.sh@21 -- # val= 00:06:14.564 22:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.564 22:06:10 -- accel/accel.sh@20 -- # IFS=: 00:06:14.564 22:06:10 -- accel/accel.sh@20 -- # read -r var val 00:06:14.564 22:06:10 -- accel/accel.sh@21 -- # val= 00:06:14.564 22:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.564 22:06:10 -- accel/accel.sh@20 -- # IFS=: 00:06:14.564 22:06:10 -- accel/accel.sh@20 -- # read -r var val 00:06:14.564 22:06:10 -- accel/accel.sh@21 -- # val= 00:06:14.564 22:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.564 22:06:10 -- accel/accel.sh@20 -- # IFS=: 00:06:14.564 22:06:10 -- accel/accel.sh@20 -- # read -r var val 00:06:14.564 22:06:10 -- accel/accel.sh@21 -- # val= 00:06:14.564 22:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.564 22:06:10 -- accel/accel.sh@20 -- # IFS=: 00:06:14.564 22:06:10 -- accel/accel.sh@20 -- # read -r var val 00:06:14.564 22:06:10 -- accel/accel.sh@21 -- # val= 00:06:14.564 22:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.564 22:06:10 -- accel/accel.sh@20 -- # IFS=: 00:06:14.564 ************************************ 00:06:14.564 END TEST accel_decomp 00:06:14.564 ************************************ 00:06:14.564 22:06:10 -- accel/accel.sh@20 -- # read -r var val 00:06:14.564 22:06:10 -- accel/accel.sh@21 -- # val= 00:06:14.564 22:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.564 22:06:10 -- accel/accel.sh@20 -- # IFS=: 00:06:14.564 22:06:10 -- accel/accel.sh@20 -- # read -r var val 00:06:14.564 22:06:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:14.564 22:06:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:14.564 22:06:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.564 00:06:14.564 real 0m3.165s 00:06:14.564 user 0m2.670s 00:06:14.564 sys 0m0.286s 00:06:14.564 22:06:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.564 22:06:10 -- common/autotest_common.sh@10 -- # set +x 00:06:14.564 22:06:10 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:06:14.564 22:06:10 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:14.564 22:06:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.564 22:06:10 -- common/autotest_common.sh@10 -- # set +x 00:06:14.564 ************************************ 00:06:14.564 START TEST accel_decmop_full 00:06:14.564 ************************************ 00:06:14.564 22:06:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:14.564 22:06:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.564 22:06:10 -- accel/accel.sh@17 -- # local accel_module 00:06:14.564 22:06:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:14.564 22:06:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:14.564 22:06:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.564 22:06:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.564 22:06:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.564 22:06:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.564 22:06:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.564 22:06:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.564 22:06:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.564 22:06:10 -- accel/accel.sh@42 -- # jq -r . 00:06:14.564 [2024-11-17 22:06:10.902892] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.564 [2024-11-17 22:06:10.903904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59291 ] 00:06:14.564 [2024-11-17 22:06:11.046048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.564 [2024-11-17 22:06:11.130420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.942 22:06:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:15.942 00:06:15.942 SPDK Configuration: 00:06:15.942 Core mask: 0x1 00:06:15.942 00:06:15.942 Accel Perf Configuration: 00:06:15.942 Workload Type: decompress 00:06:15.942 Transfer size: 111250 bytes 00:06:15.942 Vector count 1 00:06:15.942 Module: software 00:06:15.942 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:15.942 Queue depth: 32 00:06:15.942 Allocate depth: 32 00:06:15.942 # threads/core: 1 00:06:15.942 Run time: 1 seconds 00:06:15.942 Verify: Yes 00:06:15.942 00:06:15.942 Running for 1 seconds... 
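The "full" variant adds -o 0 to the same decompress invocation; compared with the previous run, the reported transfer size jumps from 4096 to 111250 bytes, so each submitted operation evidently covers a much larger span of the input. The traced call is approximately:

    # decompress with -o 0: 111250-byte transfers per op in this run, verify on
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0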
00:06:15.942 00:06:15.942 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.942 ------------------------------------------------------------------------------------ 00:06:15.942 0,0 5664/s 233 MiB/s 0 0 00:06:15.942 ==================================================================================== 00:06:15.942 Total 5664/s 600 MiB/s 0 0' 00:06:15.942 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:15.942 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:15.942 22:06:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:15.942 22:06:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.942 22:06:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:15.942 22:06:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.942 22:06:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.942 22:06:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.942 22:06:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.942 22:06:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.942 22:06:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.942 22:06:12 -- accel/accel.sh@42 -- # jq -r . 00:06:15.942 [2024-11-17 22:06:12.494401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.942 [2024-11-17 22:06:12.494844] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59311 ] 00:06:16.202 [2024-11-17 22:06:12.628837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.202 [2024-11-17 22:06:12.714110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val=0x1 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val=decompress 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:16.202 22:06:12 -- accel/accel.sh@20 
-- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val=software 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val=32 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val=32 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val=1 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val=Yes 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.202 22:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.202 22:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.202 22:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.579 22:06:14 -- accel/accel.sh@21 -- # val= 00:06:17.579 22:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.579 22:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:17.579 22:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:17.579 22:06:14 -- accel/accel.sh@21 -- # val= 00:06:17.579 22:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.579 22:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:17.579 22:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:17.579 22:06:14 -- accel/accel.sh@21 -- # val= 00:06:17.579 22:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.579 22:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:17.579 22:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:17.579 22:06:14 -- accel/accel.sh@21 -- # 
val= 00:06:17.579 22:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.579 22:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:17.579 22:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:17.579 22:06:14 -- accel/accel.sh@21 -- # val= 00:06:17.579 22:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.579 22:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:17.579 22:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:17.579 22:06:14 -- accel/accel.sh@21 -- # val= 00:06:17.579 22:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.579 22:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:17.579 22:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:17.579 22:06:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.579 22:06:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:17.579 ************************************ 00:06:17.579 END TEST accel_decmop_full 00:06:17.579 ************************************ 00:06:17.579 22:06:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.579 00:06:17.579 real 0m3.163s 00:06:17.579 user 0m2.659s 00:06:17.579 sys 0m0.298s 00:06:17.579 22:06:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.579 22:06:14 -- common/autotest_common.sh@10 -- # set +x 00:06:17.579 22:06:14 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:17.579 22:06:14 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:17.579 22:06:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.579 22:06:14 -- common/autotest_common.sh@10 -- # set +x 00:06:17.579 ************************************ 00:06:17.579 START TEST accel_decomp_mcore 00:06:17.579 ************************************ 00:06:17.579 22:06:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:17.579 22:06:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.579 22:06:14 -- accel/accel.sh@17 -- # local accel_module 00:06:17.579 22:06:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:17.579 22:06:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:17.579 22:06:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.579 22:06:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.579 22:06:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.579 22:06:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.579 22:06:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.579 22:06:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.579 22:06:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.579 22:06:14 -- accel/accel.sh@42 -- # jq -r . 00:06:17.579 [2024-11-17 22:06:14.111462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:17.579 [2024-11-17 22:06:14.111552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59353 ] 00:06:17.837 [2024-11-17 22:06:14.243465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.837 [2024-11-17 22:06:14.331022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.837 [2024-11-17 22:06:14.331170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.837 [2024-11-17 22:06:14.331305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.837 [2024-11-17 22:06:14.331305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.214 22:06:15 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:19.214 00:06:19.214 SPDK Configuration: 00:06:19.214 Core mask: 0xf 00:06:19.214 00:06:19.214 Accel Perf Configuration: 00:06:19.214 Workload Type: decompress 00:06:19.214 Transfer size: 4096 bytes 00:06:19.214 Vector count 1 00:06:19.214 Module: software 00:06:19.214 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.214 Queue depth: 32 00:06:19.214 Allocate depth: 32 00:06:19.214 # threads/core: 1 00:06:19.214 Run time: 1 seconds 00:06:19.214 Verify: Yes 00:06:19.214 00:06:19.214 Running for 1 seconds... 00:06:19.214 00:06:19.214 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:19.214 ------------------------------------------------------------------------------------ 00:06:19.214 0,0 58176/s 107 MiB/s 0 0 00:06:19.214 3,0 56416/s 103 MiB/s 0 0 00:06:19.214 2,0 55488/s 102 MiB/s 0 0 00:06:19.214 1,0 53024/s 97 MiB/s 0 0 00:06:19.214 ==================================================================================== 00:06:19.214 Total 223104/s 871 MiB/s 0 0' 00:06:19.214 22:06:15 -- accel/accel.sh@20 -- # IFS=: 00:06:19.214 22:06:15 -- accel/accel.sh@20 -- # read -r var val 00:06:19.214 22:06:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:19.214 22:06:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:19.214 22:06:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.214 22:06:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.214 22:06:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.214 22:06:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.214 22:06:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.214 22:06:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.214 22:06:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.214 22:06:15 -- accel/accel.sh@42 -- # jq -r . 00:06:19.214 [2024-11-17 22:06:15.715569] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
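The mcore variant runs the same verified decompress workload across four reactors (core mask 0xf, "Total cores available: 4" above); the four per-core rows in the table sum to the 223104 ops/s total. The traced call is approximately:

    # multi-core decompress: -m 0xf starts reactors on cores 0-3
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf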
00:06:19.214 [2024-11-17 22:06:15.715644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59370 ] 00:06:19.473 [2024-11-17 22:06:15.847547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.473 [2024-11-17 22:06:15.931880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.473 [2024-11-17 22:06:15.932022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.473 [2024-11-17 22:06:15.932129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.473 [2024-11-17 22:06:15.932488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val= 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val= 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val= 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val=0xf 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val= 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val= 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val=decompress 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val= 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val=software 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 
00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val=32 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val=32 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val=1 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val=Yes 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val= 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:19.473 22:06:16 -- accel/accel.sh@21 -- # val= 00:06:19.473 22:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # IFS=: 00:06:19.473 22:06:16 -- accel/accel.sh@20 -- # read -r var val 00:06:20.851 22:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.851 22:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.851 22:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.851 22:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.851 22:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.851 22:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.851 22:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.851 22:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.851 22:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.851 22:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.851 22:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.851 22:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.851 22:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.851 22:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.851 22:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.851 22:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.851 22:06:17 -- 
accel/accel.sh@20 -- # read -r var val 00:06:20.851 22:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.851 22:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.851 22:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.851 22:06:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.851 22:06:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:20.851 22:06:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.851 00:06:20.851 real 0m3.197s 00:06:20.851 user 0m9.977s 00:06:20.851 sys 0m0.312s 00:06:20.851 22:06:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.851 ************************************ 00:06:20.851 END TEST accel_decomp_mcore 00:06:20.851 ************************************ 00:06:20.851 22:06:17 -- common/autotest_common.sh@10 -- # set +x 00:06:20.851 22:06:17 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:20.851 22:06:17 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:20.851 22:06:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.851 22:06:17 -- common/autotest_common.sh@10 -- # set +x 00:06:20.851 ************************************ 00:06:20.851 START TEST accel_decomp_full_mcore 00:06:20.851 ************************************ 00:06:20.851 22:06:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:20.851 22:06:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.851 22:06:17 -- accel/accel.sh@17 -- # local accel_module 00:06:20.851 22:06:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:20.851 22:06:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:20.851 22:06:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.851 22:06:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.851 22:06:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.851 22:06:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.851 22:06:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.851 22:06:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.851 22:06:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.851 22:06:17 -- accel/accel.sh@42 -- # jq -r . 00:06:20.851 [2024-11-17 22:06:17.366312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.851 [2024-11-17 22:06:17.366432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59413 ] 00:06:21.110 [2024-11-17 22:06:17.503817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.110 [2024-11-17 22:06:17.587141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.110 [2024-11-17 22:06:17.587341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.110 [2024-11-17 22:06:17.587453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.110 [2024-11-17 22:06:17.587726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.488 22:06:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:22.488 00:06:22.488 SPDK Configuration: 00:06:22.488 Core mask: 0xf 00:06:22.488 00:06:22.488 Accel Perf Configuration: 00:06:22.488 Workload Type: decompress 00:06:22.488 Transfer size: 111250 bytes 00:06:22.488 Vector count 1 00:06:22.488 Module: software 00:06:22.488 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.488 Queue depth: 32 00:06:22.488 Allocate depth: 32 00:06:22.488 # threads/core: 1 00:06:22.488 Run time: 1 seconds 00:06:22.488 Verify: Yes 00:06:22.488 00:06:22.488 Running for 1 seconds... 00:06:22.488 00:06:22.488 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.488 ------------------------------------------------------------------------------------ 00:06:22.488 0,0 5248/s 216 MiB/s 0 0 00:06:22.488 3,0 5248/s 216 MiB/s 0 0 00:06:22.488 2,0 4864/s 200 MiB/s 0 0 00:06:22.488 1,0 5248/s 216 MiB/s 0 0 00:06:22.488 ==================================================================================== 00:06:22.488 Total 20608/s 2186 MiB/s 0 0' 00:06:22.488 22:06:18 -- accel/accel.sh@20 -- # IFS=: 00:06:22.488 22:06:18 -- accel/accel.sh@20 -- # read -r var val 00:06:22.488 22:06:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:22.488 22:06:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.488 22:06:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:22.488 22:06:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.488 22:06:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.488 22:06:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.488 22:06:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.488 22:06:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.488 22:06:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.488 22:06:18 -- accel/accel.sh@42 -- # jq -r . 00:06:22.488 [2024-11-17 22:06:19.015666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:22.488 [2024-11-17 22:06:19.016191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59435 ] 00:06:22.747 [2024-11-17 22:06:19.148251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.747 [2024-11-17 22:06:19.230481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.747 [2024-11-17 22:06:19.230678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.747 [2024-11-17 22:06:19.230798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.747 [2024-11-17 22:06:19.231143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val= 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val= 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val= 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val=0xf 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val= 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val= 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val=decompress 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val= 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val=software 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 
00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val=32 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val=32 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val=1 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val=Yes 00:06:22.747 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.747 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.747 22:06:19 -- accel/accel.sh@21 -- # val= 00:06:22.748 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.748 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.748 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:22.748 22:06:19 -- accel/accel.sh@21 -- # val= 00:06:22.748 22:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.748 22:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:22.748 22:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.125 22:06:20 -- accel/accel.sh@21 -- # val= 00:06:24.125 22:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # IFS=: 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # read -r var val 00:06:24.125 22:06:20 -- accel/accel.sh@21 -- # val= 00:06:24.125 22:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # IFS=: 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # read -r var val 00:06:24.125 22:06:20 -- accel/accel.sh@21 -- # val= 00:06:24.125 22:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # IFS=: 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # read -r var val 00:06:24.125 22:06:20 -- accel/accel.sh@21 -- # val= 00:06:24.125 22:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # IFS=: 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # read -r var val 00:06:24.125 22:06:20 -- accel/accel.sh@21 -- # val= 00:06:24.125 22:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # IFS=: 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # read -r var val 00:06:24.125 22:06:20 -- accel/accel.sh@21 -- # val= 00:06:24.125 22:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # IFS=: 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # read -r var val 00:06:24.125 22:06:20 -- accel/accel.sh@21 -- # val= 00:06:24.125 22:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # IFS=: 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # read -r var val 00:06:24.125 22:06:20 -- accel/accel.sh@21 -- # val= 00:06:24.125 22:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # IFS=: 00:06:24.125 22:06:20 -- 
accel/accel.sh@20 -- # read -r var val 00:06:24.125 22:06:20 -- accel/accel.sh@21 -- # val= 00:06:24.125 22:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # IFS=: 00:06:24.125 22:06:20 -- accel/accel.sh@20 -- # read -r var val 00:06:24.125 22:06:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.125 22:06:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:24.125 22:06:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.125 00:06:24.125 real 0m3.248s 00:06:24.125 user 0m10.157s 00:06:24.125 sys 0m0.338s 00:06:24.125 22:06:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.125 22:06:20 -- common/autotest_common.sh@10 -- # set +x 00:06:24.125 ************************************ 00:06:24.125 END TEST accel_decomp_full_mcore 00:06:24.125 ************************************ 00:06:24.125 22:06:20 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:24.125 22:06:20 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:24.125 22:06:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.125 22:06:20 -- common/autotest_common.sh@10 -- # set +x 00:06:24.125 ************************************ 00:06:24.125 START TEST accel_decomp_mthread 00:06:24.125 ************************************ 00:06:24.125 22:06:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:24.125 22:06:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.125 22:06:20 -- accel/accel.sh@17 -- # local accel_module 00:06:24.125 22:06:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:24.125 22:06:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:24.125 22:06:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.125 22:06:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.125 22:06:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.125 22:06:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.125 22:06:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.125 22:06:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.126 22:06:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.126 22:06:20 -- accel/accel.sh@42 -- # jq -r . 00:06:24.126 [2024-11-17 22:06:20.662065] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.126 [2024-11-17 22:06:20.662188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59473 ] 00:06:24.385 [2024-11-17 22:06:20.793413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.385 [2024-11-17 22:06:20.877864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.763 22:06:22 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:25.763 00:06:25.763 SPDK Configuration: 00:06:25.763 Core mask: 0x1 00:06:25.763 00:06:25.763 Accel Perf Configuration: 00:06:25.763 Workload Type: decompress 00:06:25.763 Transfer size: 4096 bytes 00:06:25.763 Vector count 1 00:06:25.763 Module: software 00:06:25.763 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.763 Queue depth: 32 00:06:25.763 Allocate depth: 32 00:06:25.763 # threads/core: 2 00:06:25.763 Run time: 1 seconds 00:06:25.763 Verify: Yes 00:06:25.763 00:06:25.763 Running for 1 seconds... 00:06:25.763 00:06:25.763 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.763 ------------------------------------------------------------------------------------ 00:06:25.763 0,1 42656/s 78 MiB/s 0 0 00:06:25.763 0,0 42496/s 78 MiB/s 0 0 00:06:25.763 ==================================================================================== 00:06:25.763 Total 85152/s 332 MiB/s 0 0' 00:06:25.763 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:25.763 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:25.763 22:06:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:25.763 22:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.763 22:06:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:25.763 22:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.763 22:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.763 22:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.763 22:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.763 22:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.763 22:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.763 22:06:22 -- accel/accel.sh@42 -- # jq -r . 00:06:25.763 [2024-11-17 22:06:22.226588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:25.763 [2024-11-17 22:06:22.226700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59498 ] 00:06:25.763 [2024-11-17 22:06:22.358669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.022 [2024-11-17 22:06:22.437544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val=0x1 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val=decompress 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val=software 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val=32 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- 
accel/accel.sh@21 -- # val=32 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val=2 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val=Yes 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.022 22:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.022 22:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.022 22:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.399 22:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.399 22:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.399 22:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.399 22:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.399 22:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.399 22:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.399 22:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.399 22:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.399 22:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.399 22:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.399 22:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.399 22:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.399 22:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.399 22:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.399 22:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.399 22:06:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.399 22:06:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:27.399 22:06:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.399 00:06:27.399 real 0m3.122s 00:06:27.399 user 0m2.644s 00:06:27.400 sys 0m0.275s 00:06:27.400 22:06:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.400 22:06:23 -- common/autotest_common.sh@10 -- # set +x 00:06:27.400 ************************************ 00:06:27.400 END 
TEST accel_decomp_mthread 00:06:27.400 ************************************ 00:06:27.400 22:06:23 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:27.400 22:06:23 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:27.400 22:06:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.400 22:06:23 -- common/autotest_common.sh@10 -- # set +x 00:06:27.400 ************************************ 00:06:27.400 START TEST accel_deomp_full_mthread 00:06:27.400 ************************************ 00:06:27.400 22:06:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:27.400 22:06:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.400 22:06:23 -- accel/accel.sh@17 -- # local accel_module 00:06:27.400 22:06:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:27.400 22:06:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:27.400 22:06:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.400 22:06:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.400 22:06:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.400 22:06:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.400 22:06:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.400 22:06:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.400 22:06:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.400 22:06:23 -- accel/accel.sh@42 -- # jq -r . 00:06:27.400 [2024-11-17 22:06:23.846193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.400 [2024-11-17 22:06:23.846293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59527 ] 00:06:27.400 [2024-11-17 22:06:23.982372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.659 [2024-11-17 22:06:24.065048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.034 22:06:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:29.034 00:06:29.034 SPDK Configuration: 00:06:29.034 Core mask: 0x1 00:06:29.034 00:06:29.034 Accel Perf Configuration: 00:06:29.034 Workload Type: decompress 00:06:29.034 Transfer size: 111250 bytes 00:06:29.034 Vector count 1 00:06:29.034 Module: software 00:06:29.034 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.034 Queue depth: 32 00:06:29.034 Allocate depth: 32 00:06:29.034 # threads/core: 2 00:06:29.034 Run time: 1 seconds 00:06:29.034 Verify: Yes 00:06:29.034 00:06:29.034 Running for 1 seconds... 
00:06:29.034 00:06:29.034 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.034 ------------------------------------------------------------------------------------ 00:06:29.034 0,1 2880/s 118 MiB/s 0 0 00:06:29.034 0,0 2880/s 118 MiB/s 0 0 00:06:29.034 ==================================================================================== 00:06:29.034 Total 5760/s 611 MiB/s 0 0' 00:06:29.034 22:06:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.034 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.034 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.034 22:06:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.034 22:06:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.034 22:06:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.034 22:06:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.034 22:06:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.034 22:06:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.034 22:06:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.034 22:06:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.034 22:06:25 -- accel/accel.sh@42 -- # jq -r . 00:06:29.034 [2024-11-17 22:06:25.415305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.034 [2024-11-17 22:06:25.415381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59552 ] 00:06:29.034 [2024-11-17 22:06:25.537952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.034 [2024-11-17 22:06:25.627709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val= 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val= 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val= 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val=0x1 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val= 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val= 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val=decompress 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val= 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val=software 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val=32 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val=32 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val=2 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val=Yes 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val= 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.293 22:06:25 -- accel/accel.sh@21 -- # val= 00:06:29.293 22:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:29.293 22:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.709 22:06:26 -- accel/accel.sh@21 -- # val= 00:06:30.709 22:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # read -r var val 00:06:30.709 22:06:26 -- accel/accel.sh@21 -- # val= 00:06:30.709 22:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # read -r var val 00:06:30.709 22:06:26 -- accel/accel.sh@21 -- # val= 00:06:30.709 22:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # 
read -r var val 00:06:30.709 22:06:26 -- accel/accel.sh@21 -- # val= 00:06:30.709 22:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # read -r var val 00:06:30.709 22:06:26 -- accel/accel.sh@21 -- # val= 00:06:30.709 22:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # read -r var val 00:06:30.709 22:06:26 -- accel/accel.sh@21 -- # val= 00:06:30.709 22:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # read -r var val 00:06:30.709 22:06:26 -- accel/accel.sh@21 -- # val= 00:06:30.709 22:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:30.709 22:06:26 -- accel/accel.sh@20 -- # read -r var val 00:06:30.709 22:06:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.709 22:06:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:30.709 22:06:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.709 00:06:30.709 real 0m3.145s 00:06:30.709 user 0m2.684s 00:06:30.709 sys 0m0.258s 00:06:30.709 22:06:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.709 22:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:30.709 ************************************ 00:06:30.709 END TEST accel_deomp_full_mthread 00:06:30.709 ************************************ 00:06:30.709 22:06:27 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:30.709 22:06:27 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:30.709 22:06:27 -- accel/accel.sh@129 -- # build_accel_config 00:06:30.709 22:06:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.709 22:06:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:30.709 22:06:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.709 22:06:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.709 22:06:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.709 22:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:30.709 22:06:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.709 22:06:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.709 22:06:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.709 22:06:27 -- accel/accel.sh@42 -- # jq -r . 00:06:30.709 ************************************ 00:06:30.709 START TEST accel_dif_functional_tests 00:06:30.709 ************************************ 00:06:30.709 22:06:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:30.709 [2024-11-17 22:06:27.067483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:30.710 [2024-11-17 22:06:27.067677] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59582 ] 00:06:30.710 [2024-11-17 22:06:27.199849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.710 [2024-11-17 22:06:27.282814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.710 [2024-11-17 22:06:27.282999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.710 [2024-11-17 22:06:27.283003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.969 00:06:30.969 00:06:30.969 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.969 http://cunit.sourceforge.net/ 00:06:30.969 00:06:30.969 00:06:30.969 Suite: accel_dif 00:06:30.969 Test: verify: DIF generated, GUARD check ...passed 00:06:30.969 Test: verify: DIF generated, APPTAG check ...passed 00:06:30.969 Test: verify: DIF generated, REFTAG check ...passed 00:06:30.969 Test: verify: DIF not generated, GUARD check ...[2024-11-17 22:06:27.408344] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:30.969 [2024-11-17 22:06:27.408482] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:30.969 passed 00:06:30.969 Test: verify: DIF not generated, APPTAG check ...[2024-11-17 22:06:27.408754] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:30.969 passed 00:06:30.969 Test: verify: DIF not generated, REFTAG check ...passed 00:06:30.969 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:30.969 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:30.969 Test: verify: APPTAG incorrect, no APPTAG check ...passed[2024-11-17 22:06:27.408983] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:30.969 [2024-11-17 22:06:27.409080] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:30.969 [2024-11-17 22:06:27.409172] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:30.969 [2024-11-17 22:06:27.409355] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:30.969 00:06:30.969 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:30.969 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:30.969 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-17 22:06:27.409828] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:30.969 passed 00:06:30.969 Test: generate copy: DIF generated, GUARD check ...passed 00:06:30.969 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:30.969 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:30.969 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:30.969 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:30.969 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:30.969 Test: generate copy: iovecs-len validate ...[2024-11-17 22:06:27.411097] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:30.969 passed 00:06:30.969 Test: generate copy: buffer alignment validate ...passed 00:06:30.969 00:06:30.969 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.969 suites 1 1 n/a 0 0 00:06:30.969 tests 20 20 20 0 0 00:06:30.969 asserts 204 204 204 0 n/a 00:06:30.969 00:06:30.969 Elapsed time = 0.009 seconds 00:06:31.228 00:06:31.228 real 0m0.688s 00:06:31.228 user 0m1.024s 00:06:31.228 sys 0m0.191s 00:06:31.228 22:06:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.228 22:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:31.228 ************************************ 00:06:31.228 END TEST accel_dif_functional_tests 00:06:31.228 ************************************ 00:06:31.228 00:06:31.228 real 1m8.073s 00:06:31.228 user 1m12.259s 00:06:31.228 sys 0m7.444s 00:06:31.228 ************************************ 00:06:31.228 END TEST accel 00:06:31.228 ************************************ 00:06:31.228 22:06:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.228 22:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:31.228 22:06:27 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:31.228 22:06:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.228 22:06:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.228 22:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:31.228 ************************************ 00:06:31.228 START TEST accel_rpc 00:06:31.228 ************************************ 00:06:31.228 22:06:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:31.487 * Looking for test storage... 00:06:31.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:31.487 22:06:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:31.487 22:06:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:31.487 22:06:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:31.487 22:06:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:31.487 22:06:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:31.487 22:06:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:31.487 22:06:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:31.487 22:06:27 -- scripts/common.sh@335 -- # IFS=.-: 00:06:31.487 22:06:27 -- scripts/common.sh@335 -- # read -ra ver1 00:06:31.487 22:06:27 -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.487 22:06:27 -- scripts/common.sh@336 -- # read -ra ver2 00:06:31.487 22:06:27 -- scripts/common.sh@337 -- # local 'op=<' 00:06:31.487 22:06:27 -- scripts/common.sh@339 -- # ver1_l=2 00:06:31.487 22:06:27 -- scripts/common.sh@340 -- # ver2_l=1 00:06:31.487 22:06:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:31.487 22:06:27 -- scripts/common.sh@343 -- # case "$op" in 00:06:31.487 22:06:27 -- scripts/common.sh@344 -- # : 1 00:06:31.487 22:06:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:31.487 22:06:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.487 22:06:27 -- scripts/common.sh@364 -- # decimal 1 00:06:31.487 22:06:27 -- scripts/common.sh@352 -- # local d=1 00:06:31.487 22:06:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.487 22:06:27 -- scripts/common.sh@354 -- # echo 1 00:06:31.487 22:06:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:31.487 22:06:27 -- scripts/common.sh@365 -- # decimal 2 00:06:31.487 22:06:27 -- scripts/common.sh@352 -- # local d=2 00:06:31.487 22:06:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.487 22:06:27 -- scripts/common.sh@354 -- # echo 2 00:06:31.487 22:06:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:31.487 22:06:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:31.487 22:06:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:31.487 22:06:27 -- scripts/common.sh@367 -- # return 0 00:06:31.487 22:06:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.487 22:06:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:31.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.487 --rc genhtml_branch_coverage=1 00:06:31.487 --rc genhtml_function_coverage=1 00:06:31.487 --rc genhtml_legend=1 00:06:31.487 --rc geninfo_all_blocks=1 00:06:31.487 --rc geninfo_unexecuted_blocks=1 00:06:31.487 00:06:31.487 ' 00:06:31.487 22:06:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:31.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.487 --rc genhtml_branch_coverage=1 00:06:31.487 --rc genhtml_function_coverage=1 00:06:31.487 --rc genhtml_legend=1 00:06:31.487 --rc geninfo_all_blocks=1 00:06:31.487 --rc geninfo_unexecuted_blocks=1 00:06:31.487 00:06:31.487 ' 00:06:31.487 22:06:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:31.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.487 --rc genhtml_branch_coverage=1 00:06:31.487 --rc genhtml_function_coverage=1 00:06:31.487 --rc genhtml_legend=1 00:06:31.487 --rc geninfo_all_blocks=1 00:06:31.487 --rc geninfo_unexecuted_blocks=1 00:06:31.487 00:06:31.487 ' 00:06:31.487 22:06:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:31.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.487 --rc genhtml_branch_coverage=1 00:06:31.487 --rc genhtml_function_coverage=1 00:06:31.487 --rc genhtml_legend=1 00:06:31.487 --rc geninfo_all_blocks=1 00:06:31.487 --rc geninfo_unexecuted_blocks=1 00:06:31.487 00:06:31.487 ' 00:06:31.487 22:06:27 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:31.487 22:06:27 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59660 00:06:31.487 22:06:27 -- accel/accel_rpc.sh@15 -- # waitforlisten 59660 00:06:31.487 22:06:27 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:31.487 22:06:27 -- common/autotest_common.sh@829 -- # '[' -z 59660 ']' 00:06:31.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.487 22:06:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.487 22:06:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.487 22:06:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:31.487 22:06:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.487 22:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:31.487 [2024-11-17 22:06:28.051348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.487 [2024-11-17 22:06:28.051451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59660 ] 00:06:31.746 [2024-11-17 22:06:28.190664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.746 [2024-11-17 22:06:28.272720] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.746 [2024-11-17 22:06:28.272892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.314 22:06:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.314 22:06:28 -- common/autotest_common.sh@862 -- # return 0 00:06:32.314 22:06:28 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:32.314 22:06:28 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:32.314 22:06:28 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:32.314 22:06:28 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:32.314 22:06:28 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:32.314 22:06:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.314 22:06:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.314 22:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:32.314 ************************************ 00:06:32.314 START TEST accel_assign_opcode 00:06:32.314 ************************************ 00:06:32.314 22:06:28 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:06:32.314 22:06:28 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:32.314 22:06:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.314 22:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:32.573 [2024-11-17 22:06:28.933348] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:32.573 22:06:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.573 22:06:28 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:32.573 22:06:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.573 22:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:32.573 [2024-11-17 22:06:28.941347] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:32.573 22:06:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.573 22:06:28 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:32.573 22:06:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.573 22:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:32.831 22:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.831 22:06:29 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:32.831 22:06:29 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:32.831 22:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.831 22:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.831 22:06:29 -- accel/accel_rpc.sh@42 -- # grep software 00:06:32.831 22:06:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.831 software 00:06:32.831 
************************************ 00:06:32.831 END TEST accel_assign_opcode 00:06:32.831 ************************************ 00:06:32.831 00:06:32.831 real 0m0.351s 00:06:32.831 user 0m0.041s 00:06:32.831 sys 0m0.013s 00:06:32.831 22:06:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.831 22:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.831 22:06:29 -- accel/accel_rpc.sh@55 -- # killprocess 59660 00:06:32.831 22:06:29 -- common/autotest_common.sh@936 -- # '[' -z 59660 ']' 00:06:32.831 22:06:29 -- common/autotest_common.sh@940 -- # kill -0 59660 00:06:32.831 22:06:29 -- common/autotest_common.sh@941 -- # uname 00:06:32.831 22:06:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:32.831 22:06:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59660 00:06:32.831 killing process with pid 59660 00:06:32.831 22:06:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:32.831 22:06:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:32.831 22:06:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59660' 00:06:32.831 22:06:29 -- common/autotest_common.sh@955 -- # kill 59660 00:06:32.831 22:06:29 -- common/autotest_common.sh@960 -- # wait 59660 00:06:33.398 00:06:33.398 real 0m2.115s 00:06:33.398 user 0m2.003s 00:06:33.398 sys 0m0.563s 00:06:33.398 22:06:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.398 ************************************ 00:06:33.398 END TEST accel_rpc 00:06:33.398 ************************************ 00:06:33.398 22:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:33.398 22:06:29 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:33.398 22:06:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:33.398 22:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.398 22:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:33.398 ************************************ 00:06:33.398 START TEST app_cmdline 00:06:33.398 ************************************ 00:06:33.398 22:06:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:33.657 * Looking for test storage... 
00:06:33.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:33.657 22:06:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:33.657 22:06:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:33.657 22:06:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:33.657 22:06:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:33.657 22:06:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:33.657 22:06:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:33.657 22:06:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:33.657 22:06:30 -- scripts/common.sh@335 -- # IFS=.-: 00:06:33.657 22:06:30 -- scripts/common.sh@335 -- # read -ra ver1 00:06:33.657 22:06:30 -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.657 22:06:30 -- scripts/common.sh@336 -- # read -ra ver2 00:06:33.657 22:06:30 -- scripts/common.sh@337 -- # local 'op=<' 00:06:33.657 22:06:30 -- scripts/common.sh@339 -- # ver1_l=2 00:06:33.657 22:06:30 -- scripts/common.sh@340 -- # ver2_l=1 00:06:33.657 22:06:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:33.657 22:06:30 -- scripts/common.sh@343 -- # case "$op" in 00:06:33.657 22:06:30 -- scripts/common.sh@344 -- # : 1 00:06:33.657 22:06:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:33.657 22:06:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.657 22:06:30 -- scripts/common.sh@364 -- # decimal 1 00:06:33.657 22:06:30 -- scripts/common.sh@352 -- # local d=1 00:06:33.657 22:06:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.657 22:06:30 -- scripts/common.sh@354 -- # echo 1 00:06:33.657 22:06:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:33.657 22:06:30 -- scripts/common.sh@365 -- # decimal 2 00:06:33.657 22:06:30 -- scripts/common.sh@352 -- # local d=2 00:06:33.657 22:06:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.657 22:06:30 -- scripts/common.sh@354 -- # echo 2 00:06:33.657 22:06:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:33.657 22:06:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:33.657 22:06:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:33.657 22:06:30 -- scripts/common.sh@367 -- # return 0 00:06:33.657 22:06:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.657 22:06:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:33.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.657 --rc genhtml_branch_coverage=1 00:06:33.657 --rc genhtml_function_coverage=1 00:06:33.657 --rc genhtml_legend=1 00:06:33.657 --rc geninfo_all_blocks=1 00:06:33.657 --rc geninfo_unexecuted_blocks=1 00:06:33.657 00:06:33.657 ' 00:06:33.657 22:06:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:33.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.657 --rc genhtml_branch_coverage=1 00:06:33.657 --rc genhtml_function_coverage=1 00:06:33.657 --rc genhtml_legend=1 00:06:33.657 --rc geninfo_all_blocks=1 00:06:33.657 --rc geninfo_unexecuted_blocks=1 00:06:33.657 00:06:33.657 ' 00:06:33.657 22:06:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:33.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.657 --rc genhtml_branch_coverage=1 00:06:33.657 --rc genhtml_function_coverage=1 00:06:33.657 --rc genhtml_legend=1 00:06:33.657 --rc geninfo_all_blocks=1 00:06:33.657 --rc geninfo_unexecuted_blocks=1 00:06:33.657 00:06:33.657 ' 00:06:33.657 22:06:30 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:33.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.657 --rc genhtml_branch_coverage=1 00:06:33.657 --rc genhtml_function_coverage=1 00:06:33.657 --rc genhtml_legend=1 00:06:33.657 --rc geninfo_all_blocks=1 00:06:33.657 --rc geninfo_unexecuted_blocks=1 00:06:33.657 00:06:33.657 ' 00:06:33.657 22:06:30 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:33.657 22:06:30 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59778 00:06:33.657 22:06:30 -- app/cmdline.sh@18 -- # waitforlisten 59778 00:06:33.657 22:06:30 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:33.657 22:06:30 -- common/autotest_common.sh@829 -- # '[' -z 59778 ']' 00:06:33.657 22:06:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.657 22:06:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.657 22:06:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.657 22:06:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.657 22:06:30 -- common/autotest_common.sh@10 -- # set +x 00:06:33.657 [2024-11-17 22:06:30.239297] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.657 [2024-11-17 22:06:30.239655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59778 ] 00:06:33.916 [2024-11-17 22:06:30.376237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.916 [2024-11-17 22:06:30.468022] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.916 [2024-11-17 22:06:30.468484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.853 22:06:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.853 22:06:31 -- common/autotest_common.sh@862 -- # return 0 00:06:34.853 22:06:31 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:35.112 { 00:06:35.112 "fields": { 00:06:35.112 "commit": "c13c99a5e", 00:06:35.112 "major": 24, 00:06:35.112 "minor": 1, 00:06:35.112 "patch": 1, 00:06:35.112 "suffix": "-pre" 00:06:35.112 }, 00:06:35.112 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:06:35.112 } 00:06:35.112 22:06:31 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:35.112 22:06:31 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:35.112 22:06:31 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:35.112 22:06:31 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:35.112 22:06:31 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:35.112 22:06:31 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:35.112 22:06:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.112 22:06:31 -- app/cmdline.sh@26 -- # sort 00:06:35.112 22:06:31 -- common/autotest_common.sh@10 -- # set +x 00:06:35.112 22:06:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.112 22:06:31 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:35.112 22:06:31 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:35.112 22:06:31 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.112 22:06:31 -- common/autotest_common.sh@650 -- # local es=0 00:06:35.112 22:06:31 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.112 22:06:31 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:35.112 22:06:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.112 22:06:31 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:35.112 22:06:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.112 22:06:31 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:35.112 22:06:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.112 22:06:31 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:35.112 22:06:31 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:35.112 22:06:31 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.370 2024/11/17 22:06:31 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:35.370 request: 00:06:35.370 { 00:06:35.370 "method": "env_dpdk_get_mem_stats", 00:06:35.370 "params": {} 00:06:35.370 } 00:06:35.370 Got JSON-RPC error response 00:06:35.370 GoRPCClient: error on JSON-RPC call 00:06:35.370 22:06:31 -- common/autotest_common.sh@653 -- # es=1 00:06:35.370 22:06:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.370 22:06:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.370 22:06:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.370 22:06:31 -- app/cmdline.sh@1 -- # killprocess 59778 00:06:35.370 22:06:31 -- common/autotest_common.sh@936 -- # '[' -z 59778 ']' 00:06:35.370 22:06:31 -- common/autotest_common.sh@940 -- # kill -0 59778 00:06:35.370 22:06:31 -- common/autotest_common.sh@941 -- # uname 00:06:35.370 22:06:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:35.370 22:06:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59778 00:06:35.370 22:06:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:35.370 22:06:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:35.370 killing process with pid 59778 00:06:35.370 22:06:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59778' 00:06:35.370 22:06:31 -- common/autotest_common.sh@955 -- # kill 59778 00:06:35.370 22:06:31 -- common/autotest_common.sh@960 -- # wait 59778 00:06:35.937 00:06:35.937 real 0m2.365s 00:06:35.937 user 0m2.745s 00:06:35.937 sys 0m0.612s 00:06:35.937 22:06:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.937 22:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:35.937 ************************************ 00:06:35.937 END TEST app_cmdline 00:06:35.937 ************************************ 00:06:35.937 22:06:32 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:35.937 22:06:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.937 22:06:32 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.937 22:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:35.937 ************************************ 00:06:35.937 START TEST version 00:06:35.937 ************************************ 00:06:35.937 22:06:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:35.937 * Looking for test storage... 00:06:35.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:35.937 22:06:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:35.937 22:06:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:35.937 22:06:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:36.196 22:06:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:36.196 22:06:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:36.196 22:06:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:36.196 22:06:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:36.196 22:06:32 -- scripts/common.sh@335 -- # IFS=.-: 00:06:36.196 22:06:32 -- scripts/common.sh@335 -- # read -ra ver1 00:06:36.196 22:06:32 -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.196 22:06:32 -- scripts/common.sh@336 -- # read -ra ver2 00:06:36.196 22:06:32 -- scripts/common.sh@337 -- # local 'op=<' 00:06:36.196 22:06:32 -- scripts/common.sh@339 -- # ver1_l=2 00:06:36.196 22:06:32 -- scripts/common.sh@340 -- # ver2_l=1 00:06:36.196 22:06:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:36.196 22:06:32 -- scripts/common.sh@343 -- # case "$op" in 00:06:36.196 22:06:32 -- scripts/common.sh@344 -- # : 1 00:06:36.196 22:06:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:36.196 22:06:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.196 22:06:32 -- scripts/common.sh@364 -- # decimal 1 00:06:36.196 22:06:32 -- scripts/common.sh@352 -- # local d=1 00:06:36.196 22:06:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.196 22:06:32 -- scripts/common.sh@354 -- # echo 1 00:06:36.196 22:06:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:36.196 22:06:32 -- scripts/common.sh@365 -- # decimal 2 00:06:36.196 22:06:32 -- scripts/common.sh@352 -- # local d=2 00:06:36.196 22:06:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.196 22:06:32 -- scripts/common.sh@354 -- # echo 2 00:06:36.196 22:06:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:36.196 22:06:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:36.196 22:06:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:36.196 22:06:32 -- scripts/common.sh@367 -- # return 0 00:06:36.196 22:06:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.196 22:06:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:36.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.196 --rc genhtml_branch_coverage=1 00:06:36.196 --rc genhtml_function_coverage=1 00:06:36.196 --rc genhtml_legend=1 00:06:36.196 --rc geninfo_all_blocks=1 00:06:36.196 --rc geninfo_unexecuted_blocks=1 00:06:36.196 00:06:36.196 ' 00:06:36.196 22:06:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:36.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.196 --rc genhtml_branch_coverage=1 00:06:36.196 --rc genhtml_function_coverage=1 00:06:36.196 --rc genhtml_legend=1 00:06:36.196 --rc geninfo_all_blocks=1 00:06:36.196 --rc geninfo_unexecuted_blocks=1 00:06:36.196 00:06:36.196 ' 00:06:36.197 
22:06:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:36.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.197 --rc genhtml_branch_coverage=1 00:06:36.197 --rc genhtml_function_coverage=1 00:06:36.197 --rc genhtml_legend=1 00:06:36.197 --rc geninfo_all_blocks=1 00:06:36.197 --rc geninfo_unexecuted_blocks=1 00:06:36.197 00:06:36.197 ' 00:06:36.197 22:06:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:36.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.197 --rc genhtml_branch_coverage=1 00:06:36.197 --rc genhtml_function_coverage=1 00:06:36.197 --rc genhtml_legend=1 00:06:36.197 --rc geninfo_all_blocks=1 00:06:36.197 --rc geninfo_unexecuted_blocks=1 00:06:36.197 00:06:36.197 ' 00:06:36.197 22:06:32 -- app/version.sh@17 -- # get_header_version major 00:06:36.197 22:06:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.197 22:06:32 -- app/version.sh@14 -- # cut -f2 00:06:36.197 22:06:32 -- app/version.sh@14 -- # tr -d '"' 00:06:36.197 22:06:32 -- app/version.sh@17 -- # major=24 00:06:36.197 22:06:32 -- app/version.sh@18 -- # get_header_version minor 00:06:36.197 22:06:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.197 22:06:32 -- app/version.sh@14 -- # cut -f2 00:06:36.197 22:06:32 -- app/version.sh@14 -- # tr -d '"' 00:06:36.197 22:06:32 -- app/version.sh@18 -- # minor=1 00:06:36.197 22:06:32 -- app/version.sh@19 -- # get_header_version patch 00:06:36.197 22:06:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.197 22:06:32 -- app/version.sh@14 -- # cut -f2 00:06:36.197 22:06:32 -- app/version.sh@14 -- # tr -d '"' 00:06:36.197 22:06:32 -- app/version.sh@19 -- # patch=1 00:06:36.197 22:06:32 -- app/version.sh@20 -- # get_header_version suffix 00:06:36.197 22:06:32 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.197 22:06:32 -- app/version.sh@14 -- # cut -f2 00:06:36.197 22:06:32 -- app/version.sh@14 -- # tr -d '"' 00:06:36.197 22:06:32 -- app/version.sh@20 -- # suffix=-pre 00:06:36.197 22:06:32 -- app/version.sh@22 -- # version=24.1 00:06:36.197 22:06:32 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:36.197 22:06:32 -- app/version.sh@25 -- # version=24.1.1 00:06:36.197 22:06:32 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:36.197 22:06:32 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:36.197 22:06:32 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:36.197 22:06:32 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:36.197 22:06:32 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:36.197 ************************************ 00:06:36.197 END TEST version 00:06:36.197 ************************************ 00:06:36.197 00:06:36.197 real 0m0.251s 00:06:36.197 user 0m0.158s 00:06:36.197 sys 0m0.126s 00:06:36.197 22:06:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.197 22:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:36.197 22:06:32 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:06:36.197 
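test/app/version.sh above rebuilds the version string directly from include/spdk/version.h (major 24, minor 1, patch 1, suffix -pre in this run) and cross-checks it against `python3 -c 'import spdk; print(spdk.__version__)'`. A condensed sketch of that extraction using the same grep/cut/tr pipeline seen in the trace; the helper name mirrors the script but the snippet is illustrative, and the "-pre" to "rc0" suffix mapping is left out:

    # Extract one SPDK_VERSION_* field from the header, e.g. MAJOR -> 24
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
    }

    major=$(get_header_version MAJOR)     # 24
    minor=$(get_header_version MINOR)     # 1
    patch=$(get_header_version PATCH)     # 1
    suffix=$(get_header_version SUFFIX)   # -pre
    version="${major}.${minor}"
    (( patch != 0 )) && version="${version}.${patch}"
    echo "${version}${suffix}"            # 24.1.1-pre; version.sh compares 24.1.1rc0 against the Python package

In the run above both sides resolve to 24.1.1rc0, so the `[[ ... == ... ]]` check passes and END TEST version is reported.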
22:06:32 -- spdk/autotest.sh@191 -- # uname -s 00:06:36.197 22:06:32 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:06:36.197 22:06:32 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:36.197 22:06:32 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:36.197 22:06:32 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:06:36.197 22:06:32 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:06:36.197 22:06:32 -- spdk/autotest.sh@255 -- # timing_exit lib 00:06:36.197 22:06:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:36.197 22:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:36.197 22:06:32 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:06:36.197 22:06:32 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:06:36.197 22:06:32 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:06:36.197 22:06:32 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:06:36.197 22:06:32 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:06:36.197 22:06:32 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:06:36.197 22:06:32 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:36.197 22:06:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:36.197 22:06:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.197 22:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:36.197 ************************************ 00:06:36.197 START TEST nvmf_tcp 00:06:36.197 ************************************ 00:06:36.197 22:06:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:36.455 * Looking for test storage... 00:06:36.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:36.455 22:06:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:36.455 22:06:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:36.455 22:06:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:36.455 22:06:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:36.455 22:06:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:36.455 22:06:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:36.455 22:06:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:36.455 22:06:32 -- scripts/common.sh@335 -- # IFS=.-: 00:06:36.455 22:06:32 -- scripts/common.sh@335 -- # read -ra ver1 00:06:36.455 22:06:32 -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.455 22:06:32 -- scripts/common.sh@336 -- # read -ra ver2 00:06:36.455 22:06:32 -- scripts/common.sh@337 -- # local 'op=<' 00:06:36.455 22:06:32 -- scripts/common.sh@339 -- # ver1_l=2 00:06:36.455 22:06:32 -- scripts/common.sh@340 -- # ver2_l=1 00:06:36.455 22:06:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:36.455 22:06:32 -- scripts/common.sh@343 -- # case "$op" in 00:06:36.455 22:06:32 -- scripts/common.sh@344 -- # : 1 00:06:36.455 22:06:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:36.455 22:06:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.456 22:06:32 -- scripts/common.sh@364 -- # decimal 1 00:06:36.456 22:06:32 -- scripts/common.sh@352 -- # local d=1 00:06:36.456 22:06:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.456 22:06:32 -- scripts/common.sh@354 -- # echo 1 00:06:36.456 22:06:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:36.456 22:06:32 -- scripts/common.sh@365 -- # decimal 2 00:06:36.456 22:06:32 -- scripts/common.sh@352 -- # local d=2 00:06:36.456 22:06:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.456 22:06:32 -- scripts/common.sh@354 -- # echo 2 00:06:36.456 22:06:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:36.456 22:06:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:36.456 22:06:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:36.456 22:06:32 -- scripts/common.sh@367 -- # return 0 00:06:36.456 22:06:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.456 22:06:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:36.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.456 --rc genhtml_branch_coverage=1 00:06:36.456 --rc genhtml_function_coverage=1 00:06:36.456 --rc genhtml_legend=1 00:06:36.456 --rc geninfo_all_blocks=1 00:06:36.456 --rc geninfo_unexecuted_blocks=1 00:06:36.456 00:06:36.456 ' 00:06:36.456 22:06:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:36.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.456 --rc genhtml_branch_coverage=1 00:06:36.456 --rc genhtml_function_coverage=1 00:06:36.456 --rc genhtml_legend=1 00:06:36.456 --rc geninfo_all_blocks=1 00:06:36.456 --rc geninfo_unexecuted_blocks=1 00:06:36.456 00:06:36.456 ' 00:06:36.456 22:06:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:36.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.456 --rc genhtml_branch_coverage=1 00:06:36.456 --rc genhtml_function_coverage=1 00:06:36.456 --rc genhtml_legend=1 00:06:36.456 --rc geninfo_all_blocks=1 00:06:36.456 --rc geninfo_unexecuted_blocks=1 00:06:36.456 00:06:36.456 ' 00:06:36.456 22:06:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:36.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.456 --rc genhtml_branch_coverage=1 00:06:36.456 --rc genhtml_function_coverage=1 00:06:36.456 --rc genhtml_legend=1 00:06:36.456 --rc geninfo_all_blocks=1 00:06:36.456 --rc geninfo_unexecuted_blocks=1 00:06:36.456 00:06:36.456 ' 00:06:36.456 22:06:32 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:36.456 22:06:32 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:36.456 22:06:32 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:36.456 22:06:32 -- nvmf/common.sh@7 -- # uname -s 00:06:36.456 22:06:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.456 22:06:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.456 22:06:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.456 22:06:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.456 22:06:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.456 22:06:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.456 22:06:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.456 22:06:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.456 22:06:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.456 22:06:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.456 22:06:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:06:36.456 22:06:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:06:36.456 22:06:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.456 22:06:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.456 22:06:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:36.456 22:06:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.456 22:06:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.456 22:06:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.456 22:06:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.456 22:06:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.456 22:06:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.456 22:06:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.456 22:06:32 -- paths/export.sh@5 -- # export PATH 00:06:36.456 22:06:32 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.456 22:06:32 -- nvmf/common.sh@46 -- # : 0 00:06:36.456 22:06:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:36.456 22:06:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:36.456 22:06:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:36.456 22:06:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.456 22:06:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.456 22:06:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:36.456 22:06:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:36.456 22:06:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:36.456 22:06:32 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:36.456 22:06:32 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:36.456 22:06:32 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:36.456 22:06:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.456 22:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 22:06:32 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:36.456 22:06:32 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:36.456 22:06:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:36.456 22:06:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.456 22:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:36.456 ************************************ 00:06:36.456 START TEST nvmf_example 00:06:36.456 ************************************ 00:06:36.456 22:06:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:36.715 * Looking for test storage... 00:06:36.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:36.715 22:06:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:36.715 22:06:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:36.715 22:06:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:36.715 22:06:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:36.715 22:06:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:36.715 22:06:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:36.715 22:06:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:36.715 22:06:33 -- scripts/common.sh@335 -- # IFS=.-: 00:06:36.715 22:06:33 -- scripts/common.sh@335 -- # read -ra ver1 00:06:36.715 22:06:33 -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.715 22:06:33 -- scripts/common.sh@336 -- # read -ra ver2 00:06:36.715 22:06:33 -- scripts/common.sh@337 -- # local 'op=<' 00:06:36.715 22:06:33 -- scripts/common.sh@339 -- # ver1_l=2 00:06:36.715 22:06:33 -- scripts/common.sh@340 -- # ver2_l=1 00:06:36.715 22:06:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:36.715 22:06:33 -- scripts/common.sh@343 -- # case "$op" in 00:06:36.715 22:06:33 -- scripts/common.sh@344 -- # : 1 00:06:36.715 22:06:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:36.715 22:06:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.715 22:06:33 -- scripts/common.sh@364 -- # decimal 1 00:06:36.715 22:06:33 -- scripts/common.sh@352 -- # local d=1 00:06:36.715 22:06:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.715 22:06:33 -- scripts/common.sh@354 -- # echo 1 00:06:36.715 22:06:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:36.715 22:06:33 -- scripts/common.sh@365 -- # decimal 2 00:06:36.715 22:06:33 -- scripts/common.sh@352 -- # local d=2 00:06:36.715 22:06:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.715 22:06:33 -- scripts/common.sh@354 -- # echo 2 00:06:36.715 22:06:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:36.715 22:06:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:36.715 22:06:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:36.715 22:06:33 -- scripts/common.sh@367 -- # return 0 00:06:36.715 22:06:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.715 22:06:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:36.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.715 --rc genhtml_branch_coverage=1 00:06:36.715 --rc genhtml_function_coverage=1 00:06:36.715 --rc genhtml_legend=1 00:06:36.715 --rc geninfo_all_blocks=1 00:06:36.715 --rc geninfo_unexecuted_blocks=1 00:06:36.715 00:06:36.715 ' 00:06:36.715 22:06:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:36.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.715 --rc genhtml_branch_coverage=1 00:06:36.715 --rc genhtml_function_coverage=1 00:06:36.715 --rc genhtml_legend=1 00:06:36.715 --rc geninfo_all_blocks=1 00:06:36.715 --rc geninfo_unexecuted_blocks=1 00:06:36.715 00:06:36.715 ' 00:06:36.715 22:06:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:36.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.715 --rc genhtml_branch_coverage=1 00:06:36.715 --rc genhtml_function_coverage=1 00:06:36.715 --rc genhtml_legend=1 00:06:36.715 --rc geninfo_all_blocks=1 00:06:36.715 --rc geninfo_unexecuted_blocks=1 00:06:36.715 00:06:36.715 ' 00:06:36.715 22:06:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:36.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.715 --rc genhtml_branch_coverage=1 00:06:36.715 --rc genhtml_function_coverage=1 00:06:36.715 --rc genhtml_legend=1 00:06:36.715 --rc geninfo_all_blocks=1 00:06:36.715 --rc geninfo_unexecuted_blocks=1 00:06:36.715 00:06:36.715 ' 00:06:36.715 22:06:33 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:36.715 22:06:33 -- nvmf/common.sh@7 -- # uname -s 00:06:36.715 22:06:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.715 22:06:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.715 22:06:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.715 22:06:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.715 22:06:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.715 22:06:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.715 22:06:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.715 22:06:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.715 22:06:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.715 22:06:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.715 22:06:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 
00:06:36.715 22:06:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:06:36.715 22:06:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.715 22:06:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.715 22:06:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:36.715 22:06:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.715 22:06:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.715 22:06:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.715 22:06:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.715 22:06:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.716 22:06:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.716 22:06:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.716 22:06:33 -- paths/export.sh@5 -- # export PATH 00:06:36.716 22:06:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.716 22:06:33 -- nvmf/common.sh@46 -- # : 0 00:06:36.716 22:06:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:36.716 22:06:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:36.716 22:06:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:36.716 22:06:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.716 22:06:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.716 22:06:33 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:06:36.716 22:06:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:36.716 22:06:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:36.716 22:06:33 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:36.716 22:06:33 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:36.716 22:06:33 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:36.716 22:06:33 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:36.716 22:06:33 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:36.716 22:06:33 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:36.716 22:06:33 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:36.716 22:06:33 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:36.716 22:06:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.716 22:06:33 -- common/autotest_common.sh@10 -- # set +x 00:06:36.716 22:06:33 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:36.716 22:06:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:36.716 22:06:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.716 22:06:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:36.716 22:06:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:36.716 22:06:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:36.716 22:06:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.716 22:06:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:36.716 22:06:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.716 22:06:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:36.716 22:06:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:36.716 22:06:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:36.716 22:06:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:36.716 22:06:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:36.716 22:06:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:36.716 22:06:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:36.716 22:06:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:36.716 22:06:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:36.716 22:06:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:36.716 22:06:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:36.716 22:06:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:36.716 22:06:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:36.716 22:06:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:36.716 22:06:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:36.716 22:06:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:36.716 22:06:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:36.716 22:06:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:36.716 22:06:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:36.716 Cannot find device "nvmf_init_br" 00:06:36.716 22:06:33 -- nvmf/common.sh@153 -- # true 00:06:36.716 22:06:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:36.716 Cannot find device "nvmf_tgt_br" 00:06:36.716 22:06:33 -- nvmf/common.sh@154 -- # true 00:06:36.716 22:06:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:36.716 Cannot find device "nvmf_tgt_br2" 
00:06:36.716 22:06:33 -- nvmf/common.sh@155 -- # true 00:06:36.716 22:06:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:36.716 Cannot find device "nvmf_init_br" 00:06:36.716 22:06:33 -- nvmf/common.sh@156 -- # true 00:06:36.716 22:06:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:36.716 Cannot find device "nvmf_tgt_br" 00:06:36.716 22:06:33 -- nvmf/common.sh@157 -- # true 00:06:36.716 22:06:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:36.716 Cannot find device "nvmf_tgt_br2" 00:06:36.716 22:06:33 -- nvmf/common.sh@158 -- # true 00:06:36.716 22:06:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:36.716 Cannot find device "nvmf_br" 00:06:36.716 22:06:33 -- nvmf/common.sh@159 -- # true 00:06:36.716 22:06:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:36.716 Cannot find device "nvmf_init_if" 00:06:36.716 22:06:33 -- nvmf/common.sh@160 -- # true 00:06:36.716 22:06:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:36.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:36.716 22:06:33 -- nvmf/common.sh@161 -- # true 00:06:36.716 22:06:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:36.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:36.716 22:06:33 -- nvmf/common.sh@162 -- # true 00:06:36.716 22:06:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:36.975 22:06:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:36.975 22:06:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:36.975 22:06:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:36.975 22:06:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:36.975 22:06:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:36.975 22:06:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:36.975 22:06:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:36.975 22:06:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:36.975 22:06:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:36.975 22:06:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:36.975 22:06:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:36.975 22:06:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:36.975 22:06:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:36.975 22:06:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:36.975 22:06:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:36.975 22:06:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:36.975 22:06:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:36.975 22:06:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:36.975 22:06:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:36.975 22:06:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:36.975 22:06:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:37.234 22:06:33 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:37.234 22:06:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:37.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:06:37.234 00:06:37.234 --- 10.0.0.2 ping statistics --- 00:06:37.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.234 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:06:37.234 22:06:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:37.234 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:37.234 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:06:37.234 00:06:37.234 --- 10.0.0.3 ping statistics --- 00:06:37.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.234 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:06:37.234 22:06:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:37.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:37.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:06:37.234 00:06:37.234 --- 10.0.0.1 ping statistics --- 00:06:37.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.234 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:06:37.234 22:06:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.234 22:06:33 -- nvmf/common.sh@421 -- # return 0 00:06:37.234 22:06:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:37.234 22:06:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.234 22:06:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:37.234 22:06:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:37.234 22:06:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.234 22:06:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:37.234 22:06:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:37.234 22:06:33 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:37.234 22:06:33 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:37.234 22:06:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:37.234 22:06:33 -- common/autotest_common.sh@10 -- # set +x 00:06:37.234 22:06:33 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:37.234 22:06:33 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:37.234 22:06:33 -- target/nvmf_example.sh@34 -- # nvmfpid=60166 00:06:37.234 22:06:33 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:37.234 22:06:33 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:37.234 22:06:33 -- target/nvmf_example.sh@36 -- # waitforlisten 60166 00:06:37.234 22:06:33 -- common/autotest_common.sh@829 -- # '[' -z 60166 ']' 00:06:37.234 22:06:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.234 22:06:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.234 22:06:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
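nvmf_veth_init above (nvmf/common.sh) first tears down any leftover interfaces, which is why the harmless "Cannot find device ..." messages appear, and then builds the virtual topology the example target listens on: the target runs inside the nvmf_tgt_ns_spdk namespace behind a veth pair, both pairs hang off the nvmf_br bridge, and an iptables rule opens TCP/4420 toward the initiator. Condensed from the commands traced above, with the second target interface (nvmf_tgt_if2 / 10.0.0.3) omitted; this is a sketch of the flow, not a substitute for nvmf/common.sh:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2      # host -> target sanity check, as in the trace
    modprobe nvme-tcp       # initiator needs the kernel TCP transport

With that in place, nvmf_example.sh launches build/examples/nvmf inside the namespace (pid 60166 in this run) and waits for it to listen on /var/tmp/spdk.sock before provisioning it over RPC.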
00:06:37.234 22:06:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.234 22:06:33 -- common/autotest_common.sh@10 -- # set +x 00:06:38.168 22:06:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.168 22:06:34 -- common/autotest_common.sh@862 -- # return 0 00:06:38.168 22:06:34 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:38.168 22:06:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:38.168 22:06:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.168 22:06:34 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:38.168 22:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.168 22:06:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.168 22:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.168 22:06:34 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:38.168 22:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.168 22:06:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.168 22:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.168 22:06:34 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:38.168 22:06:34 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:38.168 22:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.168 22:06:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.168 22:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.168 22:06:34 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:38.168 22:06:34 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:38.168 22:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.168 22:06:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.168 22:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.168 22:06:34 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:38.168 22:06:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.169 22:06:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.169 22:06:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.169 22:06:34 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:38.169 22:06:34 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:50.378 Initializing NVMe Controllers 00:06:50.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:50.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:50.378 Initialization complete. Launching workers. 
00:06:50.378 ======================================================== 00:06:50.378 Latency(us) 00:06:50.378 Device Information : IOPS MiB/s Average min max 00:06:50.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16990.09 66.37 3766.40 659.84 22122.69 00:06:50.378 ======================================================== 00:06:50.378 Total : 16990.09 66.37 3766.40 659.84 22122.69 00:06:50.378 00:06:50.378 22:06:44 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:50.378 22:06:44 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:50.378 22:06:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:50.378 22:06:44 -- nvmf/common.sh@116 -- # sync 00:06:50.378 22:06:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:50.378 22:06:45 -- nvmf/common.sh@119 -- # set +e 00:06:50.378 22:06:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:50.378 22:06:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:50.378 rmmod nvme_tcp 00:06:50.378 rmmod nvme_fabrics 00:06:50.378 rmmod nvme_keyring 00:06:50.378 22:06:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:50.378 22:06:45 -- nvmf/common.sh@123 -- # set -e 00:06:50.378 22:06:45 -- nvmf/common.sh@124 -- # return 0 00:06:50.378 22:06:45 -- nvmf/common.sh@477 -- # '[' -n 60166 ']' 00:06:50.378 22:06:45 -- nvmf/common.sh@478 -- # killprocess 60166 00:06:50.378 22:06:45 -- common/autotest_common.sh@936 -- # '[' -z 60166 ']' 00:06:50.378 22:06:45 -- common/autotest_common.sh@940 -- # kill -0 60166 00:06:50.378 22:06:45 -- common/autotest_common.sh@941 -- # uname 00:06:50.378 22:06:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.378 22:06:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60166 00:06:50.378 killing process with pid 60166 00:06:50.378 22:06:45 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:06:50.378 22:06:45 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:06:50.378 22:06:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60166' 00:06:50.378 22:06:45 -- common/autotest_common.sh@955 -- # kill 60166 00:06:50.378 22:06:45 -- common/autotest_common.sh@960 -- # wait 60166 00:06:50.378 nvmf threads initialize successfully 00:06:50.378 bdev subsystem init successfully 00:06:50.378 created a nvmf target service 00:06:50.378 create targets's poll groups done 00:06:50.378 all subsystems of target started 00:06:50.378 nvmf target is running 00:06:50.378 all subsystems of target stopped 00:06:50.378 destroy targets's poll groups done 00:06:50.378 destroyed the nvmf target service 00:06:50.378 bdev subsystem finish successfully 00:06:50.378 nvmf threads destroy successfully 00:06:50.379 22:06:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:50.379 22:06:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:50.379 22:06:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:50.379 22:06:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:50.379 22:06:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:50.379 22:06:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.379 22:06:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.379 22:06:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.379 22:06:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:06:50.379 22:06:45 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:50.379 22:06:45 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:06:50.379 22:06:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.379 00:06:50.379 real 0m12.510s 00:06:50.379 user 0m44.568s 00:06:50.379 sys 0m1.974s 00:06:50.379 22:06:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.379 22:06:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.379 ************************************ 00:06:50.379 END TEST nvmf_example 00:06:50.379 ************************************ 00:06:50.379 22:06:45 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:50.379 22:06:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:50.379 22:06:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.379 22:06:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.379 ************************************ 00:06:50.379 START TEST nvmf_filesystem 00:06:50.379 ************************************ 00:06:50.379 22:06:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:50.379 * Looking for test storage... 00:06:50.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:50.379 22:06:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:50.379 22:06:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:50.379 22:06:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:50.379 22:06:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:50.379 22:06:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:50.379 22:06:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:50.379 22:06:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:50.379 22:06:45 -- scripts/common.sh@335 -- # IFS=.-: 00:06:50.379 22:06:45 -- scripts/common.sh@335 -- # read -ra ver1 00:06:50.379 22:06:45 -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.379 22:06:45 -- scripts/common.sh@336 -- # read -ra ver2 00:06:50.379 22:06:45 -- scripts/common.sh@337 -- # local 'op=<' 00:06:50.379 22:06:45 -- scripts/common.sh@339 -- # ver1_l=2 00:06:50.379 22:06:45 -- scripts/common.sh@340 -- # ver2_l=1 00:06:50.379 22:06:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:50.379 22:06:45 -- scripts/common.sh@343 -- # case "$op" in 00:06:50.379 22:06:45 -- scripts/common.sh@344 -- # : 1 00:06:50.379 22:06:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:50.379 22:06:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.379 22:06:45 -- scripts/common.sh@364 -- # decimal 1 00:06:50.379 22:06:45 -- scripts/common.sh@352 -- # local d=1 00:06:50.379 22:06:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.379 22:06:45 -- scripts/common.sh@354 -- # echo 1 00:06:50.379 22:06:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:50.379 22:06:45 -- scripts/common.sh@365 -- # decimal 2 00:06:50.379 22:06:45 -- scripts/common.sh@352 -- # local d=2 00:06:50.379 22:06:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.379 22:06:45 -- scripts/common.sh@354 -- # echo 2 00:06:50.379 22:06:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:50.379 22:06:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:50.379 22:06:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:50.379 22:06:45 -- scripts/common.sh@367 -- # return 0 00:06:50.379 22:06:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.379 22:06:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:50.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.379 --rc genhtml_branch_coverage=1 00:06:50.379 --rc genhtml_function_coverage=1 00:06:50.379 --rc genhtml_legend=1 00:06:50.379 --rc geninfo_all_blocks=1 00:06:50.379 --rc geninfo_unexecuted_blocks=1 00:06:50.379 00:06:50.379 ' 00:06:50.379 22:06:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:50.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.379 --rc genhtml_branch_coverage=1 00:06:50.379 --rc genhtml_function_coverage=1 00:06:50.379 --rc genhtml_legend=1 00:06:50.379 --rc geninfo_all_blocks=1 00:06:50.379 --rc geninfo_unexecuted_blocks=1 00:06:50.379 00:06:50.379 ' 00:06:50.379 22:06:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:50.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.379 --rc genhtml_branch_coverage=1 00:06:50.379 --rc genhtml_function_coverage=1 00:06:50.379 --rc genhtml_legend=1 00:06:50.379 --rc geninfo_all_blocks=1 00:06:50.379 --rc geninfo_unexecuted_blocks=1 00:06:50.379 00:06:50.379 ' 00:06:50.379 22:06:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:50.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.379 --rc genhtml_branch_coverage=1 00:06:50.379 --rc genhtml_function_coverage=1 00:06:50.379 --rc genhtml_legend=1 00:06:50.379 --rc geninfo_all_blocks=1 00:06:50.379 --rc geninfo_unexecuted_blocks=1 00:06:50.379 00:06:50.379 ' 00:06:50.379 22:06:45 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:50.379 22:06:45 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:50.379 22:06:45 -- common/autotest_common.sh@34 -- # set -e 00:06:50.379 22:06:45 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:50.379 22:06:45 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:50.379 22:06:45 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:50.379 22:06:45 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:50.379 22:06:45 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:50.379 22:06:45 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:50.379 22:06:45 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:50.379 22:06:45 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:50.379 22:06:45 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:06:50.379 22:06:45 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:50.379 22:06:45 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:50.379 22:06:45 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:50.379 22:06:45 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:50.379 22:06:45 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:50.379 22:06:45 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:50.379 22:06:45 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:50.379 22:06:45 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:50.379 22:06:45 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:50.379 22:06:45 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:50.379 22:06:45 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:50.379 22:06:45 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:50.379 22:06:45 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:50.379 22:06:45 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:50.379 22:06:45 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:50.379 22:06:45 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:50.379 22:06:45 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:50.379 22:06:45 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:50.379 22:06:45 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:50.379 22:06:45 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:50.379 22:06:45 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:50.379 22:06:45 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:50.379 22:06:45 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:50.379 22:06:45 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:50.379 22:06:45 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:50.379 22:06:45 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:50.379 22:06:45 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:50.379 22:06:45 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:50.379 22:06:45 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:50.379 22:06:45 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:50.379 22:06:45 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:50.380 22:06:45 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:50.380 22:06:45 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:50.380 22:06:45 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:50.380 22:06:45 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:50.380 22:06:45 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:50.380 22:06:45 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:50.380 22:06:45 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:50.380 22:06:45 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:50.380 22:06:45 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:50.380 22:06:45 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:50.380 22:06:45 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:50.380 22:06:45 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:50.380 22:06:45 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:50.380 22:06:45 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:50.380 22:06:45 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:50.380 
22:06:45 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:50.380 22:06:45 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:50.380 22:06:45 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:50.380 22:06:45 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:50.380 22:06:45 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:50.380 22:06:45 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:50.380 22:06:45 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:06:50.380 22:06:45 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:50.380 22:06:45 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:50.380 22:06:45 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:50.380 22:06:45 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:50.380 22:06:45 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:50.380 22:06:45 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:50.380 22:06:45 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:50.380 22:06:45 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:50.380 22:06:45 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:50.380 22:06:45 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:06:50.380 22:06:45 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:50.380 22:06:45 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:50.380 22:06:45 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:50.380 22:06:45 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:50.380 22:06:45 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:50.380 22:06:45 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:50.380 22:06:45 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:50.380 22:06:45 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:50.380 22:06:45 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:50.380 22:06:45 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:50.380 22:06:45 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:50.380 22:06:45 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:50.380 22:06:45 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:50.380 22:06:45 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:50.380 22:06:45 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:50.380 22:06:45 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:50.380 22:06:45 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:50.380 22:06:45 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:50.380 22:06:45 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:50.380 22:06:45 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:50.380 22:06:45 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:50.380 22:06:45 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:50.380 22:06:45 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:50.380 22:06:45 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:50.380 22:06:45 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:50.380 22:06:45 -- common/applications.sh@22 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:50.380 22:06:45 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:50.380 #define SPDK_CONFIG_H 00:06:50.380 #define SPDK_CONFIG_APPS 1 00:06:50.380 #define SPDK_CONFIG_ARCH native 00:06:50.380 #undef SPDK_CONFIG_ASAN 00:06:50.380 #define SPDK_CONFIG_AVAHI 1 00:06:50.380 #undef SPDK_CONFIG_CET 00:06:50.380 #define SPDK_CONFIG_COVERAGE 1 00:06:50.380 #define SPDK_CONFIG_CROSS_PREFIX 00:06:50.380 #undef SPDK_CONFIG_CRYPTO 00:06:50.380 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:50.380 #undef SPDK_CONFIG_CUSTOMOCF 00:06:50.380 #undef SPDK_CONFIG_DAOS 00:06:50.380 #define SPDK_CONFIG_DAOS_DIR 00:06:50.380 #define SPDK_CONFIG_DEBUG 1 00:06:50.380 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:50.380 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:50.380 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:50.380 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:50.380 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:50.380 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:50.380 #define SPDK_CONFIG_EXAMPLES 1 00:06:50.380 #undef SPDK_CONFIG_FC 00:06:50.380 #define SPDK_CONFIG_FC_PATH 00:06:50.380 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:50.380 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:50.380 #undef SPDK_CONFIG_FUSE 00:06:50.380 #undef SPDK_CONFIG_FUZZER 00:06:50.380 #define SPDK_CONFIG_FUZZER_LIB 00:06:50.380 #define SPDK_CONFIG_GOLANG 1 00:06:50.380 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:50.380 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:50.380 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:50.380 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:50.380 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:50.380 #define SPDK_CONFIG_IDXD 1 00:06:50.380 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:50.380 #undef SPDK_CONFIG_IPSEC_MB 00:06:50.380 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:50.380 #define SPDK_CONFIG_ISAL 1 00:06:50.380 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:50.380 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:50.380 #define SPDK_CONFIG_LIBDIR 00:06:50.380 #undef SPDK_CONFIG_LTO 00:06:50.380 #define SPDK_CONFIG_MAX_LCORES 00:06:50.380 #define SPDK_CONFIG_NVME_CUSE 1 00:06:50.380 #undef SPDK_CONFIG_OCF 00:06:50.380 #define SPDK_CONFIG_OCF_PATH 00:06:50.380 #define SPDK_CONFIG_OPENSSL_PATH 00:06:50.380 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:50.380 #undef SPDK_CONFIG_PGO_USE 00:06:50.380 #define SPDK_CONFIG_PREFIX /usr/local 00:06:50.380 #undef SPDK_CONFIG_RAID5F 00:06:50.380 #undef SPDK_CONFIG_RBD 00:06:50.380 #define SPDK_CONFIG_RDMA 1 00:06:50.380 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:50.380 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:50.380 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:50.380 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:50.380 #define SPDK_CONFIG_SHARED 1 00:06:50.380 #undef SPDK_CONFIG_SMA 00:06:50.380 #define SPDK_CONFIG_TESTS 1 00:06:50.380 #undef SPDK_CONFIG_TSAN 00:06:50.380 #define SPDK_CONFIG_UBLK 1 00:06:50.380 #define SPDK_CONFIG_UBSAN 1 00:06:50.380 #undef SPDK_CONFIG_UNIT_TESTS 00:06:50.380 #undef SPDK_CONFIG_URING 00:06:50.380 #define SPDK_CONFIG_URING_PATH 00:06:50.380 #undef SPDK_CONFIG_URING_ZNS 00:06:50.380 #define SPDK_CONFIG_USDT 1 00:06:50.380 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:50.380 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:50.380 #define SPDK_CONFIG_VFIO_USER 1 00:06:50.380 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:50.380 #define SPDK_CONFIG_VHOST 1 00:06:50.381 #define SPDK_CONFIG_VIRTIO 1 00:06:50.381 #undef SPDK_CONFIG_VTUNE 00:06:50.381 #define SPDK_CONFIG_VTUNE_DIR 
00:06:50.381 #define SPDK_CONFIG_WERROR 1 00:06:50.381 #define SPDK_CONFIG_WPDK_DIR 00:06:50.381 #undef SPDK_CONFIG_XNVME 00:06:50.381 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:50.381 22:06:45 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:50.381 22:06:45 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:50.381 22:06:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.381 22:06:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.381 22:06:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.381 22:06:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.381 22:06:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.381 22:06:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.381 22:06:45 -- paths/export.sh@5 -- # export PATH 00:06:50.381 22:06:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.381 22:06:45 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:50.381 22:06:45 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:50.381 22:06:45 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:50.381 22:06:45 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:50.381 22:06:45 -- pm/common@7 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:50.381 22:06:45 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:50.381 22:06:45 -- pm/common@16 -- # TEST_TAG=N/A 00:06:50.381 22:06:45 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:50.381 22:06:45 -- common/autotest_common.sh@52 -- # : 1 00:06:50.381 22:06:45 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:50.381 22:06:45 -- common/autotest_common.sh@56 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:50.381 22:06:45 -- common/autotest_common.sh@58 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:50.381 22:06:45 -- common/autotest_common.sh@60 -- # : 1 00:06:50.381 22:06:45 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:50.381 22:06:45 -- common/autotest_common.sh@62 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:50.381 22:06:45 -- common/autotest_common.sh@64 -- # : 00:06:50.381 22:06:45 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:50.381 22:06:45 -- common/autotest_common.sh@66 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:06:50.381 22:06:45 -- common/autotest_common.sh@68 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:50.381 22:06:45 -- common/autotest_common.sh@70 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:50.381 22:06:45 -- common/autotest_common.sh@72 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:50.381 22:06:45 -- common/autotest_common.sh@74 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:50.381 22:06:45 -- common/autotest_common.sh@76 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:50.381 22:06:45 -- common/autotest_common.sh@78 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:50.381 22:06:45 -- common/autotest_common.sh@80 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:50.381 22:06:45 -- common/autotest_common.sh@82 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:50.381 22:06:45 -- common/autotest_common.sh@84 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:50.381 22:06:45 -- common/autotest_common.sh@86 -- # : 1 00:06:50.381 22:06:45 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:50.381 22:06:45 -- common/autotest_common.sh@88 -- # : 1 00:06:50.381 22:06:45 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:50.381 22:06:45 -- common/autotest_common.sh@90 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:50.381 22:06:45 -- common/autotest_common.sh@92 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:50.381 22:06:45 -- common/autotest_common.sh@94 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:50.381 22:06:45 -- common/autotest_common.sh@96 -- # : tcp 00:06:50.381 22:06:45 -- common/autotest_common.sh@97 -- # export 
SPDK_TEST_NVMF_TRANSPORT 00:06:50.381 22:06:45 -- common/autotest_common.sh@98 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:50.381 22:06:45 -- common/autotest_common.sh@100 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:50.381 22:06:45 -- common/autotest_common.sh@102 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:50.381 22:06:45 -- common/autotest_common.sh@104 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:50.381 22:06:45 -- common/autotest_common.sh@106 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:50.381 22:06:45 -- common/autotest_common.sh@108 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:06:50.381 22:06:45 -- common/autotest_common.sh@110 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:50.381 22:06:45 -- common/autotest_common.sh@112 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:50.381 22:06:45 -- common/autotest_common.sh@114 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:50.381 22:06:45 -- common/autotest_common.sh@116 -- # : 1 00:06:50.381 22:06:45 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:50.381 22:06:45 -- common/autotest_common.sh@118 -- # : 00:06:50.381 22:06:45 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:50.381 22:06:45 -- common/autotest_common.sh@120 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:50.381 22:06:45 -- common/autotest_common.sh@122 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:50.381 22:06:45 -- common/autotest_common.sh@124 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:50.381 22:06:45 -- common/autotest_common.sh@126 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:50.381 22:06:45 -- common/autotest_common.sh@128 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:50.381 22:06:45 -- common/autotest_common.sh@130 -- # : 0 00:06:50.381 22:06:45 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:50.382 22:06:45 -- common/autotest_common.sh@132 -- # : 00:06:50.382 22:06:45 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:50.382 22:06:45 -- common/autotest_common.sh@134 -- # : true 00:06:50.382 22:06:45 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:50.382 22:06:45 -- common/autotest_common.sh@136 -- # : 0 00:06:50.382 22:06:45 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:50.382 22:06:45 -- common/autotest_common.sh@138 -- # : 0 00:06:50.382 22:06:45 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:50.382 22:06:45 -- common/autotest_common.sh@140 -- # : 1 00:06:50.382 22:06:45 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:06:50.382 22:06:45 -- common/autotest_common.sh@142 -- # : 0 00:06:50.382 22:06:45 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:50.382 22:06:45 -- common/autotest_common.sh@144 -- # : 0 00:06:50.382 22:06:45 -- common/autotest_common.sh@145 -- # 
export SPDK_TEST_SCHEDULER 00:06:50.382 22:06:45 -- common/autotest_common.sh@146 -- # : 0 00:06:50.382 22:06:45 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:50.382 22:06:45 -- common/autotest_common.sh@148 -- # : 00:06:50.382 22:06:45 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:50.382 22:06:45 -- common/autotest_common.sh@150 -- # : 0 00:06:50.382 22:06:45 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:50.382 22:06:45 -- common/autotest_common.sh@152 -- # : 0 00:06:50.382 22:06:45 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:06:50.382 22:06:45 -- common/autotest_common.sh@154 -- # : 0 00:06:50.382 22:06:45 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:50.382 22:06:45 -- common/autotest_common.sh@156 -- # : 0 00:06:50.382 22:06:45 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:50.382 22:06:45 -- common/autotest_common.sh@158 -- # : 0 00:06:50.382 22:06:45 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:50.382 22:06:45 -- common/autotest_common.sh@160 -- # : 0 00:06:50.382 22:06:45 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:06:50.382 22:06:45 -- common/autotest_common.sh@163 -- # : 00:06:50.382 22:06:45 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:06:50.382 22:06:45 -- common/autotest_common.sh@165 -- # : 1 00:06:50.382 22:06:45 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:06:50.382 22:06:45 -- common/autotest_common.sh@167 -- # : 1 00:06:50.382 22:06:45 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:50.382 22:06:45 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:50.382 22:06:45 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:50.382 22:06:45 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:50.382 22:06:45 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:50.382 22:06:45 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:50.382 22:06:45 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:50.382 22:06:45 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:50.382 22:06:45 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:50.382 22:06:45 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:50.382 22:06:45 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:50.382 22:06:45 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:50.382 22:06:45 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:50.382 22:06:45 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:50.382 22:06:45 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:06:50.382 22:06:45 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:50.382 22:06:45 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:50.382 22:06:45 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:50.382 22:06:45 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:50.382 22:06:45 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:50.382 22:06:45 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:06:50.382 22:06:45 -- common/autotest_common.sh@196 -- # cat 00:06:50.382 22:06:45 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:06:50.382 22:06:45 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:50.382 22:06:45 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:50.382 22:06:45 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:50.382 22:06:45 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:50.382 22:06:45 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:06:50.382 22:06:45 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:06:50.382 22:06:45 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:50.382 22:06:45 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:50.382 22:06:45 -- 
common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:50.382 22:06:45 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:50.382 22:06:45 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:50.382 22:06:45 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:50.382 22:06:45 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:50.382 22:06:45 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:50.382 22:06:45 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:50.382 22:06:45 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:50.382 22:06:45 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:50.382 22:06:45 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:50.382 22:06:45 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:06:50.382 22:06:45 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:06:50.382 22:06:45 -- common/autotest_common.sh@249 -- # _LCOV= 00:06:50.382 22:06:45 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:06:50.382 22:06:45 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:06:50.382 22:06:45 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:50.382 22:06:45 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:06:50.382 22:06:45 -- common/autotest_common.sh@255 -- # lcov_opt= 00:06:50.382 22:06:45 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:06:50.382 22:06:45 -- common/autotest_common.sh@259 -- # export valgrind= 00:06:50.382 22:06:45 -- common/autotest_common.sh@259 -- # valgrind= 00:06:50.382 22:06:45 -- common/autotest_common.sh@265 -- # uname -s 00:06:50.382 22:06:45 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:06:50.382 22:06:45 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:06:50.382 22:06:45 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:06:50.382 22:06:45 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:06:50.382 22:06:45 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:06:50.382 22:06:45 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:06:50.382 22:06:45 -- common/autotest_common.sh@275 -- # MAKE=make 00:06:50.382 22:06:45 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:06:50.382 22:06:45 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:06:50.382 22:06:45 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:06:50.382 22:06:45 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:50.382 22:06:45 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:06:50.382 22:06:45 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:06:50.382 22:06:45 -- common/autotest_common.sh@301 -- # for i in "$@" 00:06:50.382 22:06:45 -- common/autotest_common.sh@302 -- # case "$i" in 00:06:50.382 22:06:45 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:06:50.382 22:06:45 -- common/autotest_common.sh@319 -- # [[ -z 60404 ]] 00:06:50.382 22:06:45 -- common/autotest_common.sh@319 -- # kill -0 60404 00:06:50.383 22:06:45 -- 
common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:06:50.383 22:06:45 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:06:50.383 22:06:45 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:06:50.383 22:06:45 -- common/autotest_common.sh@332 -- # local mount target_dir 00:06:50.383 22:06:45 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:06:50.383 22:06:45 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:06:50.383 22:06:45 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:06:50.383 22:06:45 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:06:50.383 22:06:45 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.KOcmHW 00:06:50.383 22:06:45 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:50.383 22:06:45 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:06:50.383 22:06:45 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:06:50.383 22:06:45 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.KOcmHW/tests/target /tmp/spdk.KOcmHW 00:06:50.383 22:06:45 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:06:50.383 22:06:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:50.383 22:06:45 -- common/autotest_common.sh@328 -- # df -T 00:06:50.383 22:06:45 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=14016229376 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:06:50.383 22:06:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=5551493120 00:06:50.383 22:06:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:06:50.383 22:06:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:06:50.383 22:06:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265171968 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266429440 00:06:50.383 22:06:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:06:50.383 22:06:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:06:50.383 22:06:45 -- common/autotest_common.sh@364 -- # 
uses["$mount"]=12816384 00:06:50.383 22:06:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=14016229376 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:06:50.383 22:06:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=5551493120 00:06:50.383 22:06:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:06:50.383 22:06:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:06:50.383 22:06:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266294272 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266429440 00:06:50.383 22:06:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=135168 00:06:50.383 22:06:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:06:50.383 22:06:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:06:50.383 22:06:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253273600 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253285888 00:06:50.383 22:06:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:06:50.383 22:06:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:06:50.383 22:06:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=98018406400 00:06:50.383 22:06:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:06:50.383 22:06:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=1684373504 00:06:50.383 22:06:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:06:50.383 22:06:45 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:06:50.383 * Looking for test storage... 
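For reference while reading the set_test_storage trace above: the loop parses df -T output into mount/size/avail arrays and settles on the first candidate directory whose backing filesystem still has at least the requested free space (the 2 GiB request plus a small margin, 2214592512 bytes here). The fragment below is a minimal stand-alone sketch of that idea, not the SPDK helper itself; the candidate list, the margin arithmetic, and the use of df --output=avail instead of the df -T parsing are assumptions made purely for illustration.

# Sketch only (not autotest_common.sh's set_test_storage): pick the first
# candidate directory whose filesystem reports enough free space.
requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2 GiB plus a margin, as in the trace
candidates=(/home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp)   # illustrative fallback list
for dir in "${candidates[@]}"; do
    [[ -d $dir ]] || continue
    avail=$(df -B1 --output=avail "$dir" | tail -n 1)   # free bytes on the filesystem backing $dir
    if (( avail >= requested_size )); then
        printf '* Found test storage at %s\n' "$dir"
        break
    fi
done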
00:06:50.383 22:06:45 -- common/autotest_common.sh@369 -- # local target_space new_size 00:06:50.383 22:06:45 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:06:50.383 22:06:45 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:50.383 22:06:45 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:50.383 22:06:45 -- common/autotest_common.sh@373 -- # mount=/home 00:06:50.383 22:06:45 -- common/autotest_common.sh@375 -- # target_space=14016229376 00:06:50.383 22:06:45 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:06:50.383 22:06:45 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:06:50.383 22:06:45 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:06:50.383 22:06:45 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:06:50.383 22:06:45 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:06:50.383 22:06:45 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:50.383 22:06:45 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:50.383 22:06:45 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:50.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:50.383 22:06:45 -- common/autotest_common.sh@390 -- # return 0 00:06:50.383 22:06:45 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:06:50.383 22:06:45 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:06:50.383 22:06:45 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:50.383 22:06:45 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:50.383 22:06:45 -- common/autotest_common.sh@1682 -- # true 00:06:50.383 22:06:45 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:06:50.383 22:06:45 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:50.383 22:06:45 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:50.383 22:06:45 -- common/autotest_common.sh@27 -- # exec 00:06:50.383 22:06:45 -- common/autotest_common.sh@29 -- # exec 00:06:50.383 22:06:45 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:50.383 22:06:45 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:50.383 22:06:45 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:50.383 22:06:45 -- common/autotest_common.sh@18 -- # set -x 00:06:50.383 22:06:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:50.383 22:06:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:50.383 22:06:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:50.384 22:06:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:50.384 22:06:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:50.384 22:06:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:50.384 22:06:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:50.384 22:06:45 -- scripts/common.sh@335 -- # IFS=.-: 00:06:50.384 22:06:45 -- scripts/common.sh@335 -- # read -ra ver1 00:06:50.384 22:06:45 -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.384 22:06:45 -- scripts/common.sh@336 -- # read -ra ver2 00:06:50.384 22:06:45 -- scripts/common.sh@337 -- # local 'op=<' 00:06:50.384 22:06:45 -- scripts/common.sh@339 -- # ver1_l=2 00:06:50.384 22:06:45 -- scripts/common.sh@340 -- # ver2_l=1 00:06:50.384 22:06:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:50.384 22:06:45 -- scripts/common.sh@343 -- # case "$op" in 00:06:50.384 22:06:45 -- scripts/common.sh@344 -- # : 1 00:06:50.384 22:06:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:50.384 22:06:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.384 22:06:45 -- scripts/common.sh@364 -- # decimal 1 00:06:50.384 22:06:45 -- scripts/common.sh@352 -- # local d=1 00:06:50.384 22:06:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.384 22:06:45 -- scripts/common.sh@354 -- # echo 1 00:06:50.384 22:06:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:50.384 22:06:45 -- scripts/common.sh@365 -- # decimal 2 00:06:50.384 22:06:45 -- scripts/common.sh@352 -- # local d=2 00:06:50.384 22:06:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.384 22:06:45 -- scripts/common.sh@354 -- # echo 2 00:06:50.384 22:06:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:50.384 22:06:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:50.384 22:06:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:50.384 22:06:45 -- scripts/common.sh@367 -- # return 0 00:06:50.384 22:06:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.384 22:06:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:50.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.384 --rc genhtml_branch_coverage=1 00:06:50.384 --rc genhtml_function_coverage=1 00:06:50.384 --rc genhtml_legend=1 00:06:50.384 --rc geninfo_all_blocks=1 00:06:50.384 --rc geninfo_unexecuted_blocks=1 00:06:50.384 00:06:50.384 ' 00:06:50.384 22:06:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:50.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.384 --rc genhtml_branch_coverage=1 00:06:50.384 --rc genhtml_function_coverage=1 00:06:50.384 --rc genhtml_legend=1 00:06:50.384 --rc geninfo_all_blocks=1 00:06:50.384 --rc geninfo_unexecuted_blocks=1 00:06:50.384 00:06:50.384 ' 00:06:50.384 22:06:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:50.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.384 --rc genhtml_branch_coverage=1 00:06:50.384 --rc genhtml_function_coverage=1 00:06:50.384 --rc genhtml_legend=1 00:06:50.384 --rc geninfo_all_blocks=1 00:06:50.384 --rc 
geninfo_unexecuted_blocks=1 00:06:50.384 00:06:50.384 ' 00:06:50.384 22:06:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:50.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.384 --rc genhtml_branch_coverage=1 00:06:50.384 --rc genhtml_function_coverage=1 00:06:50.384 --rc genhtml_legend=1 00:06:50.384 --rc geninfo_all_blocks=1 00:06:50.384 --rc geninfo_unexecuted_blocks=1 00:06:50.384 00:06:50.384 ' 00:06:50.384 22:06:45 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:50.384 22:06:45 -- nvmf/common.sh@7 -- # uname -s 00:06:50.384 22:06:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.384 22:06:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.384 22:06:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.384 22:06:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.384 22:06:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.384 22:06:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.384 22:06:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.384 22:06:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.384 22:06:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.384 22:06:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.384 22:06:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:06:50.384 22:06:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:06:50.384 22:06:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.384 22:06:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.384 22:06:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:50.384 22:06:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:50.384 22:06:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.384 22:06:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.384 22:06:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.384 22:06:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.384 22:06:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.384 22:06:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.384 22:06:45 -- paths/export.sh@5 -- # export PATH 00:06:50.384 22:06:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.384 22:06:45 -- nvmf/common.sh@46 -- # : 0 00:06:50.384 22:06:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:50.385 22:06:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:50.385 22:06:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:50.385 22:06:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.385 22:06:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.385 22:06:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:50.385 22:06:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:50.385 22:06:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:50.385 22:06:45 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:50.385 22:06:45 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:50.385 22:06:45 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:50.385 22:06:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:50.385 22:06:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.385 22:06:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:50.385 22:06:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:50.385 22:06:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:50.385 22:06:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.385 22:06:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.385 22:06:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.385 22:06:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:06:50.385 22:06:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:06:50.385 22:06:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:06:50.385 22:06:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:06:50.385 22:06:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:06:50.385 22:06:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:06:50.385 22:06:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.385 22:06:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.385 22:06:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:50.385 22:06:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:06:50.385 22:06:45 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:50.385 22:06:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:50.385 22:06:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:50.385 22:06:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.385 22:06:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:50.385 22:06:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:50.385 22:06:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:50.385 22:06:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:50.385 22:06:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:06:50.385 22:06:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:06:50.385 Cannot find device "nvmf_tgt_br" 00:06:50.385 22:06:45 -- nvmf/common.sh@154 -- # true 00:06:50.385 22:06:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:06:50.385 Cannot find device "nvmf_tgt_br2" 00:06:50.385 22:06:45 -- nvmf/common.sh@155 -- # true 00:06:50.385 22:06:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:06:50.385 22:06:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:06:50.385 Cannot find device "nvmf_tgt_br" 00:06:50.385 22:06:46 -- nvmf/common.sh@157 -- # true 00:06:50.385 22:06:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:06:50.385 Cannot find device "nvmf_tgt_br2" 00:06:50.385 22:06:46 -- nvmf/common.sh@158 -- # true 00:06:50.385 22:06:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:06:50.385 22:06:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:06:50.385 22:06:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:50.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:50.385 22:06:46 -- nvmf/common.sh@161 -- # true 00:06:50.385 22:06:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:50.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:50.385 22:06:46 -- nvmf/common.sh@162 -- # true 00:06:50.385 22:06:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:06:50.385 22:06:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:50.385 22:06:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:50.385 22:06:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:50.385 22:06:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:50.385 22:06:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:50.385 22:06:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:50.385 22:06:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:50.385 22:06:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:50.385 22:06:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:06:50.385 22:06:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:06:50.385 22:06:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:06:50.385 22:06:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:06:50.385 22:06:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:50.385 22:06:46 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:50.385 22:06:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:50.385 22:06:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:06:50.385 22:06:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:06:50.385 22:06:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:06:50.385 22:06:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:50.385 22:06:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:50.385 22:06:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:50.385 22:06:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:50.385 22:06:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:06:50.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:06:50.385 00:06:50.385 --- 10.0.0.2 ping statistics --- 00:06:50.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.385 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:06:50.385 22:06:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:06:50.385 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:50.385 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:06:50.385 00:06:50.385 --- 10.0.0.3 ping statistics --- 00:06:50.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.385 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:06:50.385 22:06:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:50.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:06:50.385 00:06:50.385 --- 10.0.0.1 ping statistics --- 00:06:50.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.385 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:06:50.385 22:06:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.385 22:06:46 -- nvmf/common.sh@421 -- # return 0 00:06:50.385 22:06:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:50.385 22:06:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.385 22:06:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:50.385 22:06:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:50.385 22:06:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.385 22:06:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:50.385 22:06:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:50.385 22:06:46 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:50.385 22:06:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:50.385 22:06:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.385 22:06:46 -- common/autotest_common.sh@10 -- # set +x 00:06:50.385 ************************************ 00:06:50.385 START TEST nvmf_filesystem_no_in_capsule 00:06:50.385 ************************************ 00:06:50.385 22:06:46 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:06:50.385 22:06:46 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:50.385 22:06:46 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:50.385 22:06:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:50.385 22:06:46 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:50.385 22:06:46 -- common/autotest_common.sh@10 -- # set +x 00:06:50.385 22:06:46 -- nvmf/common.sh@469 -- # nvmfpid=60579 00:06:50.386 22:06:46 -- nvmf/common.sh@470 -- # waitforlisten 60579 00:06:50.386 22:06:46 -- common/autotest_common.sh@829 -- # '[' -z 60579 ']' 00:06:50.386 22:06:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.386 22:06:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:50.386 22:06:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.386 22:06:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.386 22:06:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.386 22:06:46 -- common/autotest_common.sh@10 -- # set +x 00:06:50.386 [2024-11-17 22:06:46.354413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.386 [2024-11-17 22:06:46.354490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.386 [2024-11-17 22:06:46.480969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.386 [2024-11-17 22:06:46.575095] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.386 [2024-11-17 22:06:46.575256] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.386 [2024-11-17 22:06:46.575269] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.386 [2024-11-17 22:06:46.575277] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
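The trace above is nvmf_veth_init from test/nvmf/common.sh building an all-virtual test network before the target is started: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target gets two interfaces (10.0.0.2 and 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, and the peer ends of the three veth pairs are enslaved to the nvmf_br bridge. A minimal stand-alone sketch of the same topology, condensed from the commands in the trace and without the script's error handling, looks like this:

# Sketch only: condensed from the nvmf_veth_init trace above, not the full script.
ip netns add nvmf_tgt_ns_spdk

# One veth pair per endpoint; the *_br ends stay in the root namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-side ends into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and join the bridge-side ends into one L2 segment.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check: initiator reaches both target addresses and vice versa.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, the waitforlisten 60579 lines above), so it will listen on 10.0.0.2 while the host-side nvme CLI connects from 10.0.0.1.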
00:06:50.386 [2024-11-17 22:06:46.576165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.386 [2024-11-17 22:06:46.576349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.386 [2024-11-17 22:06:46.576495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.386 [2024-11-17 22:06:46.576497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.954 22:06:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.954 22:06:47 -- common/autotest_common.sh@862 -- # return 0 00:06:50.954 22:06:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:50.954 22:06:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:50.954 22:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:50.954 22:06:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.954 22:06:47 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:50.954 22:06:47 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:50.954 22:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.954 22:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:50.954 [2024-11-17 22:06:47.427471] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.954 22:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.954 22:06:47 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:50.954 22:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.954 22:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:51.213 Malloc1 00:06:51.213 22:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.213 22:06:47 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:51.213 22:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.213 22:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:51.213 22:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.213 22:06:47 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:51.213 22:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.213 22:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:51.213 22:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.213 22:06:47 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.213 22:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.213 22:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:51.213 [2024-11-17 22:06:47.693011] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.213 22:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.213 22:06:47 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:51.213 22:06:47 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:06:51.213 22:06:47 -- common/autotest_common.sh@1368 -- # local bdev_info 00:06:51.213 22:06:47 -- common/autotest_common.sh@1369 -- # local bs 00:06:51.213 22:06:47 -- common/autotest_common.sh@1370 -- # local nb 00:06:51.213 22:06:47 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:51.213 22:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.213 22:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:51.213 
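filesystem.sh then provisions the target entirely over JSON-RPC: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev, a subsystem with serial SPDKISFASTANDAWESOME, the bdev attached as a namespace, and a listener on 10.0.0.2:4420. rpc_cmd is the test framework's wrapper around scripts/rpc.py; issued by hand the same sequence would look roughly like the sketch below (the rpc.py path and the default /var/tmp/spdk.sock socket, which waitforlisten polls above, are assumptions here):

# Sketch of the provisioning sequence traced above, as plain rpc.py calls.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client path; rpc_cmd wraps this

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0        # TCP transport, in-capsule data disabled
$RPC bdev_malloc_create 512 512 -b Malloc1               # 512 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# get_bdev_size reads the size back with the same jq filters shown in the trace:
bs=$($RPC bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
nb=$($RPC bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
echo $((bs * nb))                                             # 536870912 bytes, the malloc_size below

The bdev_get_bdevs JSON dump that follows below is exactly what those jq filters parse; the 536870912-byte figure is later compared against the size the host sees for the connected namespace.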
22:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.213 22:06:47 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:06:51.213 { 00:06:51.213 "aliases": [ 00:06:51.213 "52dceda7-6334-4def-be3d-34110cc6590d" 00:06:51.213 ], 00:06:51.213 "assigned_rate_limits": { 00:06:51.213 "r_mbytes_per_sec": 0, 00:06:51.213 "rw_ios_per_sec": 0, 00:06:51.213 "rw_mbytes_per_sec": 0, 00:06:51.213 "w_mbytes_per_sec": 0 00:06:51.213 }, 00:06:51.213 "block_size": 512, 00:06:51.213 "claim_type": "exclusive_write", 00:06:51.213 "claimed": true, 00:06:51.213 "driver_specific": {}, 00:06:51.213 "memory_domains": [ 00:06:51.213 { 00:06:51.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.213 "dma_device_type": 2 00:06:51.213 } 00:06:51.213 ], 00:06:51.213 "name": "Malloc1", 00:06:51.213 "num_blocks": 1048576, 00:06:51.213 "product_name": "Malloc disk", 00:06:51.214 "supported_io_types": { 00:06:51.214 "abort": true, 00:06:51.214 "compare": false, 00:06:51.214 "compare_and_write": false, 00:06:51.214 "flush": true, 00:06:51.214 "nvme_admin": false, 00:06:51.214 "nvme_io": false, 00:06:51.214 "read": true, 00:06:51.214 "reset": true, 00:06:51.214 "unmap": true, 00:06:51.214 "write": true, 00:06:51.214 "write_zeroes": true 00:06:51.214 }, 00:06:51.214 "uuid": "52dceda7-6334-4def-be3d-34110cc6590d", 00:06:51.214 "zoned": false 00:06:51.214 } 00:06:51.214 ]' 00:06:51.214 22:06:47 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:06:51.214 22:06:47 -- common/autotest_common.sh@1372 -- # bs=512 00:06:51.214 22:06:47 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:06:51.214 22:06:47 -- common/autotest_common.sh@1373 -- # nb=1048576 00:06:51.214 22:06:47 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:06:51.214 22:06:47 -- common/autotest_common.sh@1377 -- # echo 512 00:06:51.214 22:06:47 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:51.472 22:06:47 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:51.472 22:06:47 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:51.472 22:06:47 -- common/autotest_common.sh@1187 -- # local i=0 00:06:51.472 22:06:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:06:51.472 22:06:47 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:06:51.472 22:06:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:06:54.010 22:06:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:06:54.010 22:06:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:06:54.010 22:06:49 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:06:54.010 22:06:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:06:54.010 22:06:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:06:54.010 22:06:50 -- common/autotest_common.sh@1197 -- # return 0 00:06:54.010 22:06:50 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:54.010 22:06:50 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:54.010 22:06:50 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:54.010 22:06:50 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:54.010 22:06:50 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:54.010 22:06:50 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:54.010 22:06:50 -- 
setup/common.sh@80 -- # echo 536870912 00:06:54.010 22:06:50 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:54.010 22:06:50 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:54.010 22:06:50 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:54.010 22:06:50 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:54.010 22:06:50 -- target/filesystem.sh@69 -- # partprobe 00:06:54.010 22:06:50 -- target/filesystem.sh@70 -- # sleep 1 00:06:54.578 22:06:51 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:54.578 22:06:51 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:54.578 22:06:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:54.578 22:06:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.578 22:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:54.837 ************************************ 00:06:54.837 START TEST filesystem_ext4 00:06:54.837 ************************************ 00:06:54.837 22:06:51 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:54.837 22:06:51 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:54.838 22:06:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:54.838 22:06:51 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:54.838 22:06:51 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:54.838 22:06:51 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:54.838 22:06:51 -- common/autotest_common.sh@914 -- # local i=0 00:06:54.838 22:06:51 -- common/autotest_common.sh@915 -- # local force 00:06:54.838 22:06:51 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:54.838 22:06:51 -- common/autotest_common.sh@918 -- # force=-F 00:06:54.838 22:06:51 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:54.838 mke2fs 1.47.0 (5-Feb-2023) 00:06:54.838 Discarding device blocks: 0/522240 done 00:06:54.838 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:54.838 Filesystem UUID: f065f34a-9aa3-4c03-a87d-3b7fb57c08de 00:06:54.838 Superblock backups stored on blocks: 00:06:54.838 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:54.838 00:06:54.838 Allocating group tables: 0/64 done 00:06:54.838 Writing inode tables: 0/64 done 00:06:54.838 Creating journal (8192 blocks): done 00:06:54.838 Writing superblocks and filesystem accounting information: 0/64 done 00:06:54.838 00:06:54.838 22:06:51 -- common/autotest_common.sh@931 -- # return 0 00:06:54.838 22:06:51 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:00.158 22:06:56 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:00.158 22:06:56 -- target/filesystem.sh@25 -- # sync 00:07:00.158 22:06:56 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:00.158 22:06:56 -- target/filesystem.sh@27 -- # sync 00:07:00.158 22:06:56 -- target/filesystem.sh@29 -- # i=0 00:07:00.158 22:06:56 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:00.418 22:06:56 -- target/filesystem.sh@37 -- # kill -0 60579 00:07:00.418 22:06:56 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:00.418 22:06:56 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:00.418 22:06:56 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:00.418 22:06:56 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:00.418 00:07:00.418 real 0m5.597s 00:07:00.418 user 0m0.028s 00:07:00.418 sys 0m0.065s 00:07:00.418 
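On the host side the flow mirrors what any NVMe/TCP initiator would do: connect to the subsystem, find the block device by its serial, lay down a GPT partition, then for each filesystem type make a filesystem, mount it, and exercise it with a small write/sync/remove cycle. A condensed sketch of the commands visible in the trace (NVME_HOSTNQN/NVME_HOSTID come from nvmf/common.sh; the device name nvme0n1 is simply whatever the kernel assigns, which is why the script looks it up by serial):

# Host-side sketch of the connect + filesystem exercise from the trace above.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
     --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

# Wait for a namespace with the subsystem's serial, then resolve its block-device name.
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 1; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

# One GPT partition spanning the whole namespace.
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe

# The per-filesystem body: create, mount, write, sync, remove, unmount.
mkdir -p /mnt/device
mkfs.ext4 -F "/dev/${nvme_name}p1"       # the btrfs/xfs passes use mkfs.btrfs -f / mkfs.xfs -f
mount "/dev/${nvme_name}p1" /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device

filesystem.sh runs that same body three times (ext4, btrfs, xfs); the timing summary above (real 0m5.597s for ext4) is the per-filesystem cost on the 512 MiB malloc namespace.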
22:06:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.418 22:06:56 -- common/autotest_common.sh@10 -- # set +x 00:07:00.418 ************************************ 00:07:00.418 END TEST filesystem_ext4 00:07:00.418 ************************************ 00:07:00.418 22:06:56 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:00.418 22:06:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:00.418 22:06:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.418 22:06:56 -- common/autotest_common.sh@10 -- # set +x 00:07:00.418 ************************************ 00:07:00.418 START TEST filesystem_btrfs 00:07:00.418 ************************************ 00:07:00.418 22:06:56 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:00.418 22:06:56 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:00.418 22:06:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:00.418 22:06:56 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:00.418 22:06:56 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:00.418 22:06:56 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:00.418 22:06:56 -- common/autotest_common.sh@914 -- # local i=0 00:07:00.418 22:06:56 -- common/autotest_common.sh@915 -- # local force 00:07:00.418 22:06:56 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:00.418 22:06:56 -- common/autotest_common.sh@920 -- # force=-f 00:07:00.418 22:06:56 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:00.677 btrfs-progs v6.8.1 00:07:00.677 See https://btrfs.readthedocs.io for more information. 00:07:00.677 00:07:00.677 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:00.677 NOTE: several default settings have changed in version 5.15, please make sure 00:07:00.677 this does not affect your deployments: 00:07:00.677 - DUP for metadata (-m dup) 00:07:00.677 - enabled no-holes (-O no-holes) 00:07:00.677 - enabled free-space-tree (-R free-space-tree) 00:07:00.677 00:07:00.677 Label: (null) 00:07:00.677 UUID: a8153130-4c0c-47a8-96c6-9d429703e9a3 00:07:00.677 Node size: 16384 00:07:00.677 Sector size: 4096 (CPU page size: 4096) 00:07:00.677 Filesystem size: 510.00MiB 00:07:00.677 Block group profiles: 00:07:00.677 Data: single 8.00MiB 00:07:00.677 Metadata: DUP 32.00MiB 00:07:00.677 System: DUP 8.00MiB 00:07:00.677 SSD detected: yes 00:07:00.677 Zoned device: no 00:07:00.677 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:00.677 Checksum: crc32c 00:07:00.677 Number of devices: 1 00:07:00.677 Devices: 00:07:00.677 ID SIZE PATH 00:07:00.677 1 510.00MiB /dev/nvme0n1p1 00:07:00.677 00:07:00.677 22:06:57 -- common/autotest_common.sh@931 -- # return 0 00:07:00.677 22:06:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:00.677 22:06:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:00.677 22:06:57 -- target/filesystem.sh@25 -- # sync 00:07:00.677 22:06:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:00.677 22:06:57 -- target/filesystem.sh@27 -- # sync 00:07:00.677 22:06:57 -- target/filesystem.sh@29 -- # i=0 00:07:00.678 22:06:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:00.678 22:06:57 -- target/filesystem.sh@37 -- # kill -0 60579 00:07:00.678 22:06:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:00.678 22:06:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:00.678 22:06:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:00.678 22:06:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:00.678 ************************************ 00:07:00.678 END TEST filesystem_btrfs 00:07:00.678 ************************************ 00:07:00.678 00:07:00.678 real 0m0.317s 00:07:00.678 user 0m0.017s 00:07:00.678 sys 0m0.067s 00:07:00.678 22:06:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.678 22:06:57 -- common/autotest_common.sh@10 -- # set +x 00:07:00.678 22:06:57 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:00.678 22:06:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:00.678 22:06:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.678 22:06:57 -- common/autotest_common.sh@10 -- # set +x 00:07:00.678 ************************************ 00:07:00.678 START TEST filesystem_xfs 00:07:00.678 ************************************ 00:07:00.678 22:06:57 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:00.678 22:06:57 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:00.678 22:06:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:00.678 22:06:57 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:00.678 22:06:57 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:00.678 22:06:57 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:00.678 22:06:57 -- common/autotest_common.sh@914 -- # local i=0 00:07:00.678 22:06:57 -- common/autotest_common.sh@915 -- # local force 00:07:00.678 22:06:57 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:00.678 22:06:57 -- common/autotest_common.sh@920 -- # force=-f 00:07:00.678 22:06:57 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:07:00.937 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:00.937 = sectsz=512 attr=2, projid32bit=1 00:07:00.937 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:00.937 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:00.937 data = bsize=4096 blocks=130560, imaxpct=25 00:07:00.937 = sunit=0 swidth=0 blks 00:07:00.937 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:00.937 log =internal log bsize=4096 blocks=16384, version=2 00:07:00.937 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:00.937 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:01.504 Discarding blocks...Done. 00:07:01.504 22:06:58 -- common/autotest_common.sh@931 -- # return 0 00:07:01.504 22:06:58 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:04.037 22:07:00 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:04.037 22:07:00 -- target/filesystem.sh@25 -- # sync 00:07:04.037 22:07:00 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:04.037 22:07:00 -- target/filesystem.sh@27 -- # sync 00:07:04.037 22:07:00 -- target/filesystem.sh@29 -- # i=0 00:07:04.037 22:07:00 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:04.037 22:07:00 -- target/filesystem.sh@37 -- # kill -0 60579 00:07:04.037 22:07:00 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:04.037 22:07:00 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:04.037 22:07:00 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:04.037 22:07:00 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:04.037 ************************************ 00:07:04.037 END TEST filesystem_xfs 00:07:04.037 ************************************ 00:07:04.037 00:07:04.037 real 0m3.240s 00:07:04.037 user 0m0.030s 00:07:04.037 sys 0m0.057s 00:07:04.037 22:07:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.037 22:07:00 -- common/autotest_common.sh@10 -- # set +x 00:07:04.037 22:07:00 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:04.037 22:07:00 -- target/filesystem.sh@93 -- # sync 00:07:04.037 22:07:00 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:04.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.297 22:07:00 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:04.297 22:07:00 -- common/autotest_common.sh@1208 -- # local i=0 00:07:04.297 22:07:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:04.297 22:07:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:04.297 22:07:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:04.297 22:07:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:04.297 22:07:00 -- common/autotest_common.sh@1220 -- # return 0 00:07:04.297 22:07:00 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:04.297 22:07:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.297 22:07:00 -- common/autotest_common.sh@10 -- # set +x 00:07:04.297 22:07:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.297 22:07:00 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:04.297 22:07:00 -- target/filesystem.sh@101 -- # killprocess 60579 00:07:04.297 22:07:00 -- common/autotest_common.sh@936 -- # '[' -z 60579 ']' 00:07:04.297 22:07:00 -- common/autotest_common.sh@940 -- # kill -0 60579 00:07:04.297 22:07:00 -- common/autotest_common.sh@941 -- # uname 00:07:04.297 22:07:00 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:04.297 22:07:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60579 00:07:04.297 killing process with pid 60579 00:07:04.297 22:07:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:04.297 22:07:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:04.297 22:07:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60579' 00:07:04.297 22:07:00 -- common/autotest_common.sh@955 -- # kill 60579 00:07:04.297 22:07:00 -- common/autotest_common.sh@960 -- # wait 60579 00:07:04.866 22:07:01 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:04.866 00:07:04.866 real 0m15.139s 00:07:04.866 user 0m58.614s 00:07:04.866 sys 0m1.627s 00:07:04.866 22:07:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.866 ************************************ 00:07:04.866 END TEST nvmf_filesystem_no_in_capsule 00:07:04.866 ************************************ 00:07:04.866 22:07:01 -- common/autotest_common.sh@10 -- # set +x 00:07:05.125 22:07:01 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:05.125 22:07:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:05.125 22:07:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.125 22:07:01 -- common/autotest_common.sh@10 -- # set +x 00:07:05.125 ************************************ 00:07:05.125 START TEST nvmf_filesystem_in_capsule 00:07:05.125 ************************************ 00:07:05.125 22:07:01 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:07:05.125 22:07:01 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:05.125 22:07:01 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:05.125 22:07:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:05.125 22:07:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.125 22:07:01 -- common/autotest_common.sh@10 -- # set +x 00:07:05.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.125 22:07:01 -- nvmf/common.sh@469 -- # nvmfpid=60962 00:07:05.125 22:07:01 -- nvmf/common.sh@470 -- # waitforlisten 60962 00:07:05.125 22:07:01 -- common/autotest_common.sh@829 -- # '[' -z 60962 ']' 00:07:05.125 22:07:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.125 22:07:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.125 22:07:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:05.125 22:07:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.125 22:07:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.125 22:07:01 -- common/autotest_common.sh@10 -- # set +x 00:07:05.125 [2024-11-17 22:07:01.550642] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
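The second pass repeats the identical filesystem matrix with in-capsule data enabled: nvmf_filesystem_part 4096 sets in_capsule=4096, so the transport is created with -c 4096 and small writes can be carried inside the NVMe/TCP command capsule instead of requiring a separate data transfer after the target asks for it. The only provisioning difference from the first pass is that one transport flag (same assumed rpc.py wrapper as in the sketch above):

# Only the transport creation differs between the two passes.
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096   # allow up to 4 KiB of in-capsule data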
00:07:05.125 [2024-11-17 22:07:01.550714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.125 [2024-11-17 22:07:01.684993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.384 [2024-11-17 22:07:01.777588] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:05.384 [2024-11-17 22:07:01.777768] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.384 [2024-11-17 22:07:01.777782] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.384 [2024-11-17 22:07:01.777790] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:05.384 [2024-11-17 22:07:01.777938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.384 [2024-11-17 22:07:01.778027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.384 [2024-11-17 22:07:01.778192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.384 [2024-11-17 22:07:01.778192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.952 22:07:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.952 22:07:02 -- common/autotest_common.sh@862 -- # return 0 00:07:05.952 22:07:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:05.952 22:07:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:05.952 22:07:02 -- common/autotest_common.sh@10 -- # set +x 00:07:05.952 22:07:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.952 22:07:02 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:05.952 22:07:02 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:05.952 22:07:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.952 22:07:02 -- common/autotest_common.sh@10 -- # set +x 00:07:05.952 [2024-11-17 22:07:02.547438] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.211 22:07:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.211 22:07:02 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:06.211 22:07:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.211 22:07:02 -- common/autotest_common.sh@10 -- # set +x 00:07:06.211 Malloc1 00:07:06.211 22:07:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.211 22:07:02 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:06.211 22:07:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.211 22:07:02 -- common/autotest_common.sh@10 -- # set +x 00:07:06.211 22:07:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.211 22:07:02 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:06.211 22:07:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.211 22:07:02 -- common/autotest_common.sh@10 -- # set +x 00:07:06.211 22:07:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.211 22:07:02 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.211 22:07:02 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.211 22:07:02 -- common/autotest_common.sh@10 -- # set +x 00:07:06.211 [2024-11-17 22:07:02.801491] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.211 22:07:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.211 22:07:02 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:06.211 22:07:02 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:06.211 22:07:02 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:06.211 22:07:02 -- common/autotest_common.sh@1369 -- # local bs 00:07:06.211 22:07:02 -- common/autotest_common.sh@1370 -- # local nb 00:07:06.211 22:07:02 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:06.211 22:07:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.211 22:07:02 -- common/autotest_common.sh@10 -- # set +x 00:07:06.471 22:07:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.471 22:07:02 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:06.471 { 00:07:06.471 "aliases": [ 00:07:06.471 "bd0abbd5-4476-4110-9e57-3685553c566c" 00:07:06.471 ], 00:07:06.471 "assigned_rate_limits": { 00:07:06.471 "r_mbytes_per_sec": 0, 00:07:06.471 "rw_ios_per_sec": 0, 00:07:06.471 "rw_mbytes_per_sec": 0, 00:07:06.471 "w_mbytes_per_sec": 0 00:07:06.471 }, 00:07:06.471 "block_size": 512, 00:07:06.471 "claim_type": "exclusive_write", 00:07:06.471 "claimed": true, 00:07:06.471 "driver_specific": {}, 00:07:06.471 "memory_domains": [ 00:07:06.471 { 00:07:06.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.471 "dma_device_type": 2 00:07:06.471 } 00:07:06.471 ], 00:07:06.471 "name": "Malloc1", 00:07:06.471 "num_blocks": 1048576, 00:07:06.471 "product_name": "Malloc disk", 00:07:06.471 "supported_io_types": { 00:07:06.471 "abort": true, 00:07:06.471 "compare": false, 00:07:06.471 "compare_and_write": false, 00:07:06.471 "flush": true, 00:07:06.471 "nvme_admin": false, 00:07:06.471 "nvme_io": false, 00:07:06.471 "read": true, 00:07:06.471 "reset": true, 00:07:06.471 "unmap": true, 00:07:06.471 "write": true, 00:07:06.471 "write_zeroes": true 00:07:06.471 }, 00:07:06.471 "uuid": "bd0abbd5-4476-4110-9e57-3685553c566c", 00:07:06.471 "zoned": false 00:07:06.471 } 00:07:06.471 ]' 00:07:06.471 22:07:02 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:06.471 22:07:02 -- common/autotest_common.sh@1372 -- # bs=512 00:07:06.471 22:07:02 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:06.471 22:07:02 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:06.471 22:07:02 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:06.471 22:07:02 -- common/autotest_common.sh@1377 -- # echo 512 00:07:06.471 22:07:02 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:06.471 22:07:02 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:06.729 22:07:03 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:06.729 22:07:03 -- common/autotest_common.sh@1187 -- # local i=0 00:07:06.729 22:07:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:06.729 22:07:03 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:06.729 22:07:03 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:08.649 22:07:05 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:08.649 22:07:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:08.649 22:07:05 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:08.649 22:07:05 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:08.649 22:07:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:08.649 22:07:05 -- common/autotest_common.sh@1197 -- # return 0 00:07:08.649 22:07:05 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:08.649 22:07:05 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:08.649 22:07:05 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:08.649 22:07:05 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:08.649 22:07:05 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:08.649 22:07:05 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:08.649 22:07:05 -- setup/common.sh@80 -- # echo 536870912 00:07:08.649 22:07:05 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:08.649 22:07:05 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:08.649 22:07:05 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:08.649 22:07:05 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:08.649 22:07:05 -- target/filesystem.sh@69 -- # partprobe 00:07:08.907 22:07:05 -- target/filesystem.sh@70 -- # sleep 1 00:07:09.842 22:07:06 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:09.842 22:07:06 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:09.842 22:07:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:09.842 22:07:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.842 22:07:06 -- common/autotest_common.sh@10 -- # set +x 00:07:09.842 ************************************ 00:07:09.842 START TEST filesystem_in_capsule_ext4 00:07:09.842 ************************************ 00:07:09.842 22:07:06 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:09.842 22:07:06 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:09.842 22:07:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:09.842 22:07:06 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:09.842 22:07:06 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:09.842 22:07:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:09.842 22:07:06 -- common/autotest_common.sh@914 -- # local i=0 00:07:09.842 22:07:06 -- common/autotest_common.sh@915 -- # local force 00:07:09.842 22:07:06 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:09.842 22:07:06 -- common/autotest_common.sh@918 -- # force=-F 00:07:09.842 22:07:06 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:09.842 mke2fs 1.47.0 (5-Feb-2023) 00:07:10.101 Discarding device blocks: 0/522240 done 00:07:10.101 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:10.101 Filesystem UUID: 3eb655c0-7045-49dd-a060-d50c217b5b65 00:07:10.101 Superblock backups stored on blocks: 00:07:10.101 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:10.101 00:07:10.101 Allocating group tables: 0/64 done 00:07:10.101 Writing inode tables: 0/64 done 00:07:10.101 Creating journal (8192 blocks): done 00:07:10.101 Writing superblocks and filesystem accounting information: 0/64 done 00:07:10.101 00:07:10.101 22:07:06 
-- common/autotest_common.sh@931 -- # return 0 00:07:10.101 22:07:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:15.412 22:07:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:15.412 22:07:11 -- target/filesystem.sh@25 -- # sync 00:07:15.412 22:07:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:15.412 22:07:11 -- target/filesystem.sh@27 -- # sync 00:07:15.412 22:07:11 -- target/filesystem.sh@29 -- # i=0 00:07:15.412 22:07:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:15.412 22:07:11 -- target/filesystem.sh@37 -- # kill -0 60962 00:07:15.412 22:07:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:15.412 22:07:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:15.412 22:07:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:15.412 22:07:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:15.412 00:07:15.412 real 0m5.647s 00:07:15.412 user 0m0.022s 00:07:15.412 sys 0m0.061s 00:07:15.412 22:07:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.412 22:07:11 -- common/autotest_common.sh@10 -- # set +x 00:07:15.412 ************************************ 00:07:15.412 END TEST filesystem_in_capsule_ext4 00:07:15.412 ************************************ 00:07:15.412 22:07:11 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:15.412 22:07:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:15.412 22:07:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.412 22:07:11 -- common/autotest_common.sh@10 -- # set +x 00:07:15.412 ************************************ 00:07:15.412 START TEST filesystem_in_capsule_btrfs 00:07:15.412 ************************************ 00:07:15.412 22:07:11 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:15.412 22:07:11 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:15.412 22:07:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:15.412 22:07:11 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:15.412 22:07:11 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:15.413 22:07:11 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:15.413 22:07:11 -- common/autotest_common.sh@914 -- # local i=0 00:07:15.413 22:07:11 -- common/autotest_common.sh@915 -- # local force 00:07:15.413 22:07:11 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:15.413 22:07:11 -- common/autotest_common.sh@920 -- # force=-f 00:07:15.413 22:07:11 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:15.671 btrfs-progs v6.8.1 00:07:15.671 See https://btrfs.readthedocs.io for more information. 00:07:15.671 00:07:15.671 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:15.671 NOTE: several default settings have changed in version 5.15, please make sure 00:07:15.671 this does not affect your deployments: 00:07:15.671 - DUP for metadata (-m dup) 00:07:15.671 - enabled no-holes (-O no-holes) 00:07:15.671 - enabled free-space-tree (-R free-space-tree) 00:07:15.671 00:07:15.671 Label: (null) 00:07:15.671 UUID: 72a09312-6c71-4eef-8fc2-72be65f1e4d3 00:07:15.671 Node size: 16384 00:07:15.671 Sector size: 4096 (CPU page size: 4096) 00:07:15.671 Filesystem size: 510.00MiB 00:07:15.671 Block group profiles: 00:07:15.671 Data: single 8.00MiB 00:07:15.671 Metadata: DUP 32.00MiB 00:07:15.671 System: DUP 8.00MiB 00:07:15.671 SSD detected: yes 00:07:15.671 Zoned device: no 00:07:15.671 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:15.671 Checksum: crc32c 00:07:15.671 Number of devices: 1 00:07:15.671 Devices: 00:07:15.671 ID SIZE PATH 00:07:15.671 1 510.00MiB /dev/nvme0n1p1 00:07:15.671 00:07:15.671 22:07:12 -- common/autotest_common.sh@931 -- # return 0 00:07:15.671 22:07:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:15.671 22:07:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:15.671 22:07:12 -- target/filesystem.sh@25 -- # sync 00:07:15.671 22:07:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:15.671 22:07:12 -- target/filesystem.sh@27 -- # sync 00:07:15.671 22:07:12 -- target/filesystem.sh@29 -- # i=0 00:07:15.671 22:07:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:15.671 22:07:12 -- target/filesystem.sh@37 -- # kill -0 60962 00:07:15.671 22:07:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:15.671 22:07:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:15.671 22:07:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:15.671 22:07:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:15.671 00:07:15.671 real 0m0.276s 00:07:15.671 user 0m0.021s 00:07:15.671 sys 0m0.062s 00:07:15.671 22:07:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.671 22:07:12 -- common/autotest_common.sh@10 -- # set +x 00:07:15.671 ************************************ 00:07:15.671 END TEST filesystem_in_capsule_btrfs 00:07:15.671 ************************************ 00:07:15.931 22:07:12 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:15.931 22:07:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:15.931 22:07:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.931 22:07:12 -- common/autotest_common.sh@10 -- # set +x 00:07:15.931 ************************************ 00:07:15.931 START TEST filesystem_in_capsule_xfs 00:07:15.931 ************************************ 00:07:15.931 22:07:12 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:15.931 22:07:12 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:15.931 22:07:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:15.931 22:07:12 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:15.931 22:07:12 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:15.931 22:07:12 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:15.931 22:07:12 -- common/autotest_common.sh@914 -- # local i=0 00:07:15.931 22:07:12 -- common/autotest_common.sh@915 -- # local force 00:07:15.931 22:07:12 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:15.931 22:07:12 -- common/autotest_common.sh@920 -- # force=-f 00:07:15.931 22:07:12 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:15.931 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:15.931 = sectsz=512 attr=2, projid32bit=1 00:07:15.931 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:15.931 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:15.931 data = bsize=4096 blocks=130560, imaxpct=25 00:07:15.931 = sunit=0 swidth=0 blks 00:07:15.931 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:15.931 log =internal log bsize=4096 blocks=16384, version=2 00:07:15.931 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:15.931 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:16.868 Discarding blocks...Done. 00:07:16.868 22:07:13 -- common/autotest_common.sh@931 -- # return 0 00:07:16.868 22:07:13 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:18.772 22:07:14 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:18.772 22:07:14 -- target/filesystem.sh@25 -- # sync 00:07:18.772 22:07:14 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:18.772 22:07:14 -- target/filesystem.sh@27 -- # sync 00:07:18.772 22:07:14 -- target/filesystem.sh@29 -- # i=0 00:07:18.772 22:07:14 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:18.772 22:07:15 -- target/filesystem.sh@37 -- # kill -0 60962 00:07:18.772 22:07:15 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:18.772 22:07:15 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:18.772 22:07:15 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:18.772 22:07:15 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:18.772 00:07:18.772 real 0m2.726s 00:07:18.772 user 0m0.032s 00:07:18.772 sys 0m0.048s 00:07:18.772 22:07:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.772 22:07:15 -- common/autotest_common.sh@10 -- # set +x 00:07:18.772 ************************************ 00:07:18.772 END TEST filesystem_in_capsule_xfs 00:07:18.772 ************************************ 00:07:18.772 22:07:15 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:18.772 22:07:15 -- target/filesystem.sh@93 -- # sync 00:07:18.772 22:07:15 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:18.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.772 22:07:15 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:18.772 22:07:15 -- common/autotest_common.sh@1208 -- # local i=0 00:07:18.772 22:07:15 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:18.772 22:07:15 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.772 22:07:15 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:18.773 22:07:15 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.773 22:07:15 -- common/autotest_common.sh@1220 -- # return 0 00:07:18.773 22:07:15 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.773 22:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.773 22:07:15 -- common/autotest_common.sh@10 -- # set +x 00:07:18.773 22:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.773 22:07:15 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:18.773 22:07:15 -- target/filesystem.sh@101 -- # killprocess 60962 00:07:18.773 22:07:15 -- common/autotest_common.sh@936 -- # '[' -z 60962 ']' 00:07:18.773 22:07:15 -- common/autotest_common.sh@940 -- # kill -0 60962 00:07:18.773 22:07:15 -- 
common/autotest_common.sh@941 -- # uname 00:07:18.773 22:07:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:18.773 22:07:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60962 00:07:18.773 22:07:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:18.773 22:07:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:18.773 killing process with pid 60962 00:07:18.773 22:07:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60962' 00:07:18.773 22:07:15 -- common/autotest_common.sh@955 -- # kill 60962 00:07:18.773 22:07:15 -- common/autotest_common.sh@960 -- # wait 60962 00:07:19.341 22:07:15 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:19.341 00:07:19.341 real 0m14.360s 00:07:19.341 user 0m55.414s 00:07:19.341 sys 0m1.631s 00:07:19.341 22:07:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.341 22:07:15 -- common/autotest_common.sh@10 -- # set +x 00:07:19.341 ************************************ 00:07:19.341 END TEST nvmf_filesystem_in_capsule 00:07:19.341 ************************************ 00:07:19.341 22:07:15 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:19.341 22:07:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:19.341 22:07:15 -- nvmf/common.sh@116 -- # sync 00:07:19.341 22:07:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:19.341 22:07:15 -- nvmf/common.sh@119 -- # set +e 00:07:19.341 22:07:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:19.341 22:07:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:19.341 rmmod nvme_tcp 00:07:19.341 rmmod nvme_fabrics 00:07:19.599 rmmod nvme_keyring 00:07:19.599 22:07:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:19.599 22:07:15 -- nvmf/common.sh@123 -- # set -e 00:07:19.599 22:07:15 -- nvmf/common.sh@124 -- # return 0 00:07:19.599 22:07:15 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:19.599 22:07:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:19.599 22:07:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:19.599 22:07:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:19.599 22:07:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:19.599 22:07:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:19.599 22:07:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.599 22:07:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.599 22:07:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.599 22:07:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:19.599 00:07:19.599 real 0m30.470s 00:07:19.599 user 1m54.367s 00:07:19.599 sys 0m3.706s 00:07:19.599 22:07:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.599 22:07:16 -- common/autotest_common.sh@10 -- # set +x 00:07:19.599 ************************************ 00:07:19.600 END TEST nvmf_filesystem 00:07:19.600 ************************************ 00:07:19.600 22:07:16 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:19.600 22:07:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:19.600 22:07:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.600 22:07:16 -- common/autotest_common.sh@10 -- # set +x 00:07:19.600 ************************************ 00:07:19.600 START TEST nvmf_discovery 00:07:19.600 ************************************ 00:07:19.600 22:07:16 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:19.600 * Looking for test storage... 00:07:19.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:19.600 22:07:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:19.600 22:07:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:19.600 22:07:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:19.859 22:07:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:19.859 22:07:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:19.859 22:07:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:19.859 22:07:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:19.859 22:07:16 -- scripts/common.sh@335 -- # IFS=.-: 00:07:19.859 22:07:16 -- scripts/common.sh@335 -- # read -ra ver1 00:07:19.859 22:07:16 -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.859 22:07:16 -- scripts/common.sh@336 -- # read -ra ver2 00:07:19.859 22:07:16 -- scripts/common.sh@337 -- # local 'op=<' 00:07:19.859 22:07:16 -- scripts/common.sh@339 -- # ver1_l=2 00:07:19.859 22:07:16 -- scripts/common.sh@340 -- # ver2_l=1 00:07:19.859 22:07:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:19.859 22:07:16 -- scripts/common.sh@343 -- # case "$op" in 00:07:19.859 22:07:16 -- scripts/common.sh@344 -- # : 1 00:07:19.859 22:07:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:19.859 22:07:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.859 22:07:16 -- scripts/common.sh@364 -- # decimal 1 00:07:19.859 22:07:16 -- scripts/common.sh@352 -- # local d=1 00:07:19.859 22:07:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.859 22:07:16 -- scripts/common.sh@354 -- # echo 1 00:07:19.859 22:07:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:19.859 22:07:16 -- scripts/common.sh@365 -- # decimal 2 00:07:19.859 22:07:16 -- scripts/common.sh@352 -- # local d=2 00:07:19.859 22:07:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.859 22:07:16 -- scripts/common.sh@354 -- # echo 2 00:07:19.859 22:07:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:19.859 22:07:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:19.859 22:07:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:19.859 22:07:16 -- scripts/common.sh@367 -- # return 0 00:07:19.859 22:07:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.859 22:07:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:19.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.859 --rc genhtml_branch_coverage=1 00:07:19.859 --rc genhtml_function_coverage=1 00:07:19.859 --rc genhtml_legend=1 00:07:19.859 --rc geninfo_all_blocks=1 00:07:19.859 --rc geninfo_unexecuted_blocks=1 00:07:19.859 00:07:19.859 ' 00:07:19.859 22:07:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:19.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.859 --rc genhtml_branch_coverage=1 00:07:19.859 --rc genhtml_function_coverage=1 00:07:19.859 --rc genhtml_legend=1 00:07:19.859 --rc geninfo_all_blocks=1 00:07:19.859 --rc geninfo_unexecuted_blocks=1 00:07:19.859 00:07:19.859 ' 00:07:19.859 22:07:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:19.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.859 --rc genhtml_branch_coverage=1 00:07:19.859 --rc genhtml_function_coverage=1 00:07:19.859 --rc genhtml_legend=1 00:07:19.859 
--rc geninfo_all_blocks=1 00:07:19.859 --rc geninfo_unexecuted_blocks=1 00:07:19.859 00:07:19.859 ' 00:07:19.859 22:07:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:19.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.859 --rc genhtml_branch_coverage=1 00:07:19.859 --rc genhtml_function_coverage=1 00:07:19.859 --rc genhtml_legend=1 00:07:19.859 --rc geninfo_all_blocks=1 00:07:19.859 --rc geninfo_unexecuted_blocks=1 00:07:19.859 00:07:19.859 ' 00:07:19.859 22:07:16 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:19.859 22:07:16 -- nvmf/common.sh@7 -- # uname -s 00:07:19.859 22:07:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.859 22:07:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.859 22:07:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.859 22:07:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.859 22:07:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.859 22:07:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.859 22:07:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.859 22:07:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.859 22:07:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.859 22:07:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.859 22:07:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:07:19.859 22:07:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:07:19.859 22:07:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.859 22:07:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.859 22:07:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:19.859 22:07:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:19.859 22:07:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.859 22:07:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.859 22:07:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.859 22:07:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.859 22:07:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.859 22:07:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.859 22:07:16 -- paths/export.sh@5 -- # export PATH 00:07:19.860 22:07:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.860 22:07:16 -- nvmf/common.sh@46 -- # : 0 00:07:19.860 22:07:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:19.860 22:07:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:19.860 22:07:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:19.860 22:07:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.860 22:07:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.860 22:07:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:19.860 22:07:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:19.860 22:07:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:19.860 22:07:16 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:19.860 22:07:16 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:19.860 22:07:16 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:19.860 22:07:16 -- target/discovery.sh@15 -- # hash nvme 00:07:19.860 22:07:16 -- target/discovery.sh@20 -- # nvmftestinit 00:07:19.860 22:07:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:19.860 22:07:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.860 22:07:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:19.860 22:07:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:19.860 22:07:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:19.860 22:07:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.860 22:07:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.860 22:07:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.860 22:07:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:19.860 22:07:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:19.860 22:07:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:19.860 22:07:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:19.860 22:07:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:19.860 22:07:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:19.860 22:07:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.860 22:07:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.860 22:07:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:19.860 22:07:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:19.860 22:07:16 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:19.860 22:07:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:19.860 22:07:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:19.860 22:07:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.860 22:07:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:19.860 22:07:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:19.860 22:07:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:19.860 22:07:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:19.860 22:07:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:19.860 22:07:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:19.860 Cannot find device "nvmf_tgt_br" 00:07:19.860 22:07:16 -- nvmf/common.sh@154 -- # true 00:07:19.860 22:07:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:19.860 Cannot find device "nvmf_tgt_br2" 00:07:19.860 22:07:16 -- nvmf/common.sh@155 -- # true 00:07:19.860 22:07:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:19.860 22:07:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:19.860 Cannot find device "nvmf_tgt_br" 00:07:19.860 22:07:16 -- nvmf/common.sh@157 -- # true 00:07:19.860 22:07:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:19.860 Cannot find device "nvmf_tgt_br2" 00:07:19.860 22:07:16 -- nvmf/common.sh@158 -- # true 00:07:19.860 22:07:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:19.860 22:07:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:19.860 22:07:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:19.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:19.860 22:07:16 -- nvmf/common.sh@161 -- # true 00:07:19.860 22:07:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:19.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:19.860 22:07:16 -- nvmf/common.sh@162 -- # true 00:07:19.860 22:07:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:19.860 22:07:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:19.860 22:07:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:19.860 22:07:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:19.860 22:07:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:19.860 22:07:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:20.119 22:07:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:20.119 22:07:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:20.119 22:07:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:20.119 22:07:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:20.119 22:07:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:20.119 22:07:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:20.119 22:07:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:20.119 22:07:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:20.119 22:07:16 
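For readers following the trace, the nvmf_veth_init bring-up shown around here reduces to roughly the sketch below. It is only a simplified reading of the traced commands, not the harness itself: the namespace name, interface names and 10.0.0.x address plan are copied from the trace, and the real common.sh additionally enslaves the host-side veth ends to the nvmf_br bridge, installs iptables ACCEPT rules and verifies the links with ping, as the following lines show.

  # Target side lives in its own network namespace; initiator side stays on the host.
  ip netns add nvmf_tgt_ns_spdk

  # One veth pair per link: the *_if end carries traffic, the *_br end gets bridged later.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Move the target-facing ends into the namespace.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Address plan from the trace: initiator 10.0.0.1, first target 10.0.0.2, second target 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring the interfaces up on both sides (loopback inside the namespace as well).
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up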
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:20.119 22:07:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:20.119 22:07:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:20.119 22:07:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:20.119 22:07:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:20.119 22:07:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:20.119 22:07:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:20.119 22:07:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:20.119 22:07:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:20.119 22:07:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:20.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:20.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:07:20.119 00:07:20.119 --- 10.0.0.2 ping statistics --- 00:07:20.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.119 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:07:20.119 22:07:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:20.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:20.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:07:20.119 00:07:20.119 --- 10.0.0.3 ping statistics --- 00:07:20.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.119 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:20.119 22:07:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:20.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:20.119 00:07:20.119 --- 10.0.0.1 ping statistics --- 00:07:20.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.119 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:20.119 22:07:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.119 22:07:16 -- nvmf/common.sh@421 -- # return 0 00:07:20.119 22:07:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:20.119 22:07:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.119 22:07:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:20.119 22:07:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:20.119 22:07:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.119 22:07:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:20.119 22:07:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:20.119 22:07:16 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:20.119 22:07:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:20.119 22:07:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:20.119 22:07:16 -- common/autotest_common.sh@10 -- # set +x 00:07:20.119 22:07:16 -- nvmf/common.sh@469 -- # nvmfpid=61504 00:07:20.119 22:07:16 -- nvmf/common.sh@470 -- # waitforlisten 61504 00:07:20.119 22:07:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:20.119 22:07:16 -- common/autotest_common.sh@829 -- # '[' -z 61504 ']' 00:07:20.119 22:07:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.119 22:07:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.119 22:07:16 -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.119 22:07:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.119 22:07:16 -- common/autotest_common.sh@10 -- # set +x 00:07:20.119 [2024-11-17 22:07:16.691503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.119 [2024-11-17 22:07:16.691564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.378 [2024-11-17 22:07:16.827842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.378 [2024-11-17 22:07:16.946460] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:20.378 [2024-11-17 22:07:16.946929] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.378 [2024-11-17 22:07:16.947079] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.378 [2024-11-17 22:07:16.947231] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:20.378 [2024-11-17 22:07:16.947617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.378 [2024-11-17 22:07:16.947731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.378 [2024-11-17 22:07:16.947870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.378 [2024-11-17 22:07:16.947879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.314 22:07:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.314 22:07:17 -- common/autotest_common.sh@862 -- # return 0 00:07:21.314 22:07:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:21.314 22:07:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:21.314 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.314 22:07:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.314 22:07:17 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.314 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.314 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.314 [2024-11-17 22:07:17.784657] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.314 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.314 22:07:17 -- target/discovery.sh@26 -- # seq 1 4 00:07:21.315 22:07:17 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:21.315 22:07:17 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 Null1 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 [2024-11-17 22:07:17.848880] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:21.315 22:07:17 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 Null2 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:21.315 22:07:17 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 Null3 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:21.315 22:07:17 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 Null4 00:07:21.315 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.315 22:07:17 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:21.315 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.315 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.574 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.574 22:07:17 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:21.574 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.574 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.574 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.574 22:07:17 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:21.574 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.574 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.574 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.574 22:07:17 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:21.574 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.575 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.575 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.575 22:07:17 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:21.575 22:07:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.575 22:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:21.575 22:07:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.575 22:07:17 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -a 10.0.0.2 -s 4420 00:07:21.575 00:07:21.575 Discovery Log Number of Records 6, Generation counter 6 00:07:21.575 =====Discovery Log Entry 0====== 00:07:21.575 trtype: tcp 00:07:21.575 adrfam: ipv4 00:07:21.575 subtype: current discovery subsystem 00:07:21.575 treq: not required 00:07:21.575 portid: 0 00:07:21.575 trsvcid: 4420 00:07:21.575 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:21.575 traddr: 10.0.0.2 00:07:21.575 eflags: explicit discovery connections, duplicate discovery information 00:07:21.575 sectype: none 00:07:21.575 =====Discovery Log Entry 1====== 00:07:21.575 trtype: tcp 00:07:21.575 adrfam: ipv4 00:07:21.575 subtype: nvme subsystem 00:07:21.575 treq: not required 00:07:21.575 portid: 0 00:07:21.575 trsvcid: 4420 00:07:21.575 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:21.575 traddr: 10.0.0.2 00:07:21.575 eflags: none 00:07:21.575 sectype: none 00:07:21.575 =====Discovery Log Entry 2====== 00:07:21.575 trtype: tcp 00:07:21.575 adrfam: ipv4 00:07:21.575 subtype: nvme subsystem 00:07:21.575 treq: not required 00:07:21.575 portid: 0 00:07:21.575 trsvcid: 4420 
00:07:21.575 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:21.575 traddr: 10.0.0.2 00:07:21.575 eflags: none 00:07:21.575 sectype: none 00:07:21.575 =====Discovery Log Entry 3====== 00:07:21.575 trtype: tcp 00:07:21.575 adrfam: ipv4 00:07:21.575 subtype: nvme subsystem 00:07:21.575 treq: not required 00:07:21.575 portid: 0 00:07:21.575 trsvcid: 4420 00:07:21.575 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:21.575 traddr: 10.0.0.2 00:07:21.575 eflags: none 00:07:21.575 sectype: none 00:07:21.575 =====Discovery Log Entry 4====== 00:07:21.575 trtype: tcp 00:07:21.575 adrfam: ipv4 00:07:21.575 subtype: nvme subsystem 00:07:21.575 treq: not required 00:07:21.575 portid: 0 00:07:21.575 trsvcid: 4420 00:07:21.575 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:21.575 traddr: 10.0.0.2 00:07:21.575 eflags: none 00:07:21.575 sectype: none 00:07:21.575 =====Discovery Log Entry 5====== 00:07:21.575 trtype: tcp 00:07:21.575 adrfam: ipv4 00:07:21.575 subtype: discovery subsystem referral 00:07:21.575 treq: not required 00:07:21.575 portid: 0 00:07:21.575 trsvcid: 4430 00:07:21.575 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:21.575 traddr: 10.0.0.2 00:07:21.575 eflags: none 00:07:21.575 sectype: none 00:07:21.575 Perform nvmf subsystem discovery via RPC 00:07:21.575 22:07:18 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:21.575 22:07:18 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:21.575 22:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.575 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.575 [2024-11-17 22:07:18.089636] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:21.575 [ 00:07:21.575 { 00:07:21.575 "allow_any_host": true, 00:07:21.575 "hosts": [], 00:07:21.575 "listen_addresses": [ 00:07:21.575 { 00:07:21.575 "adrfam": "IPv4", 00:07:21.575 "traddr": "10.0.0.2", 00:07:21.575 "transport": "TCP", 00:07:21.575 "trsvcid": "4420", 00:07:21.575 "trtype": "TCP" 00:07:21.575 } 00:07:21.575 ], 00:07:21.575 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:21.575 "subtype": "Discovery" 00:07:21.575 }, 00:07:21.575 { 00:07:21.575 "allow_any_host": true, 00:07:21.575 "hosts": [], 00:07:21.575 "listen_addresses": [ 00:07:21.575 { 00:07:21.575 "adrfam": "IPv4", 00:07:21.575 "traddr": "10.0.0.2", 00:07:21.575 "transport": "TCP", 00:07:21.575 "trsvcid": "4420", 00:07:21.575 "trtype": "TCP" 00:07:21.575 } 00:07:21.575 ], 00:07:21.575 "max_cntlid": 65519, 00:07:21.575 "max_namespaces": 32, 00:07:21.575 "min_cntlid": 1, 00:07:21.575 "model_number": "SPDK bdev Controller", 00:07:21.575 "namespaces": [ 00:07:21.575 { 00:07:21.575 "bdev_name": "Null1", 00:07:21.575 "name": "Null1", 00:07:21.575 "nguid": "1F2555B507EE4F7BAA59E39AD1D86CFE", 00:07:21.575 "nsid": 1, 00:07:21.575 "uuid": "1f2555b5-07ee-4f7b-aa59-e39ad1d86cfe" 00:07:21.575 } 00:07:21.575 ], 00:07:21.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:21.575 "serial_number": "SPDK00000000000001", 00:07:21.575 "subtype": "NVMe" 00:07:21.575 }, 00:07:21.575 { 00:07:21.575 "allow_any_host": true, 00:07:21.575 "hosts": [], 00:07:21.575 "listen_addresses": [ 00:07:21.575 { 00:07:21.575 "adrfam": "IPv4", 00:07:21.575 "traddr": "10.0.0.2", 00:07:21.575 "transport": "TCP", 00:07:21.575 "trsvcid": "4420", 00:07:21.575 "trtype": "TCP" 00:07:21.575 } 00:07:21.575 ], 00:07:21.575 "max_cntlid": 65519, 00:07:21.575 "max_namespaces": 32, 00:07:21.575 "min_cntlid": 1, 
00:07:21.575 "model_number": "SPDK bdev Controller", 00:07:21.575 "namespaces": [ 00:07:21.575 { 00:07:21.575 "bdev_name": "Null2", 00:07:21.575 "name": "Null2", 00:07:21.575 "nguid": "018A875DFDAC4F70B0C2697E9BA1EB52", 00:07:21.575 "nsid": 1, 00:07:21.575 "uuid": "018a875d-fdac-4f70-b0c2-697e9ba1eb52" 00:07:21.575 } 00:07:21.575 ], 00:07:21.575 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:21.575 "serial_number": "SPDK00000000000002", 00:07:21.575 "subtype": "NVMe" 00:07:21.575 }, 00:07:21.575 { 00:07:21.575 "allow_any_host": true, 00:07:21.575 "hosts": [], 00:07:21.575 "listen_addresses": [ 00:07:21.575 { 00:07:21.575 "adrfam": "IPv4", 00:07:21.575 "traddr": "10.0.0.2", 00:07:21.575 "transport": "TCP", 00:07:21.575 "trsvcid": "4420", 00:07:21.575 "trtype": "TCP" 00:07:21.575 } 00:07:21.575 ], 00:07:21.575 "max_cntlid": 65519, 00:07:21.575 "max_namespaces": 32, 00:07:21.575 "min_cntlid": 1, 00:07:21.575 "model_number": "SPDK bdev Controller", 00:07:21.575 "namespaces": [ 00:07:21.575 { 00:07:21.575 "bdev_name": "Null3", 00:07:21.575 "name": "Null3", 00:07:21.575 "nguid": "4140FCDE35044A9B86EF5BF82E847A63", 00:07:21.575 "nsid": 1, 00:07:21.575 "uuid": "4140fcde-3504-4a9b-86ef-5bf82e847a63" 00:07:21.575 } 00:07:21.575 ], 00:07:21.575 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:21.575 "serial_number": "SPDK00000000000003", 00:07:21.575 "subtype": "NVMe" 00:07:21.575 }, 00:07:21.575 { 00:07:21.575 "allow_any_host": true, 00:07:21.575 "hosts": [], 00:07:21.575 "listen_addresses": [ 00:07:21.575 { 00:07:21.575 "adrfam": "IPv4", 00:07:21.575 "traddr": "10.0.0.2", 00:07:21.575 "transport": "TCP", 00:07:21.575 "trsvcid": "4420", 00:07:21.575 "trtype": "TCP" 00:07:21.575 } 00:07:21.575 ], 00:07:21.575 "max_cntlid": 65519, 00:07:21.575 "max_namespaces": 32, 00:07:21.575 "min_cntlid": 1, 00:07:21.575 "model_number": "SPDK bdev Controller", 00:07:21.575 "namespaces": [ 00:07:21.575 { 00:07:21.575 "bdev_name": "Null4", 00:07:21.575 "name": "Null4", 00:07:21.575 "nguid": "C187D2A3A4384F23A69B2CD34ECE60AC", 00:07:21.575 "nsid": 1, 00:07:21.575 "uuid": "c187d2a3-a438-4f23-a69b-2cd34ece60ac" 00:07:21.575 } 00:07:21.575 ], 00:07:21.575 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:21.575 "serial_number": "SPDK00000000000004", 00:07:21.575 "subtype": "NVMe" 00:07:21.575 } 00:07:21.575 ] 00:07:21.575 22:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.575 22:07:18 -- target/discovery.sh@42 -- # seq 1 4 00:07:21.575 22:07:18 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:21.575 22:07:18 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.575 22:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.575 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.575 22:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.575 22:07:18 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:21.575 22:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.575 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.575 22:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.575 22:07:18 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:21.575 22:07:18 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:21.575 22:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.575 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.575 22:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.575 22:07:18 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:21.575 22:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.575 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.575 22:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.575 22:07:18 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:21.576 22:07:18 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:21.576 22:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.576 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.576 22:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.576 22:07:18 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:21.576 22:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.576 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.576 22:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.576 22:07:18 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:21.576 22:07:18 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:21.576 22:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.576 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.576 22:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.576 22:07:18 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:21.576 22:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.576 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.576 22:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.576 22:07:18 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:21.835 22:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.835 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.835 22:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.835 22:07:18 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:21.835 22:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.835 22:07:18 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:21.835 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:21.835 22:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.835 22:07:18 -- target/discovery.sh@49 -- # check_bdevs= 00:07:21.835 22:07:18 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:21.835 22:07:18 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:21.835 22:07:18 -- target/discovery.sh@57 -- # nvmftestfini 00:07:21.835 22:07:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:21.835 22:07:18 -- nvmf/common.sh@116 -- # sync 00:07:21.835 22:07:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:21.835 22:07:18 -- nvmf/common.sh@119 -- # set +e 00:07:21.835 22:07:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:21.835 22:07:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:21.835 rmmod nvme_tcp 00:07:21.835 rmmod nvme_fabrics 00:07:21.835 rmmod nvme_keyring 00:07:21.835 22:07:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:21.835 22:07:18 -- nvmf/common.sh@123 -- # set -e 00:07:21.835 22:07:18 -- nvmf/common.sh@124 -- # return 0 00:07:21.835 22:07:18 -- nvmf/common.sh@477 -- # '[' -n 61504 ']' 00:07:21.835 22:07:18 -- nvmf/common.sh@478 -- # killprocess 61504 00:07:21.835 22:07:18 -- common/autotest_common.sh@936 -- # '[' -z 61504 ']' 00:07:21.835 22:07:18 -- 
common/autotest_common.sh@940 -- # kill -0 61504 00:07:21.835 22:07:18 -- common/autotest_common.sh@941 -- # uname 00:07:21.835 22:07:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:21.835 22:07:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61504 00:07:21.835 22:07:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:21.835 22:07:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:21.835 killing process with pid 61504 00:07:21.835 22:07:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61504' 00:07:21.835 22:07:18 -- common/autotest_common.sh@955 -- # kill 61504 00:07:21.836 [2024-11-17 22:07:18.369102] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:21.836 22:07:18 -- common/autotest_common.sh@960 -- # wait 61504 00:07:22.095 22:07:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:22.095 22:07:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:22.095 22:07:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:22.095 22:07:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:22.095 22:07:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:22.095 22:07:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.095 22:07:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.095 22:07:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.354 22:07:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:22.354 00:07:22.354 real 0m2.655s 00:07:22.354 user 0m7.284s 00:07:22.354 sys 0m0.649s 00:07:22.354 22:07:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.354 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:22.354 ************************************ 00:07:22.354 END TEST nvmf_discovery 00:07:22.354 ************************************ 00:07:22.354 22:07:18 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:22.354 22:07:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:22.354 22:07:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.354 22:07:18 -- common/autotest_common.sh@10 -- # set +x 00:07:22.354 ************************************ 00:07:22.354 START TEST nvmf_referrals 00:07:22.354 ************************************ 00:07:22.354 22:07:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:22.354 * Looking for test storage... 
00:07:22.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:22.354 22:07:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:22.354 22:07:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:22.354 22:07:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:22.354 22:07:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:22.354 22:07:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:22.354 22:07:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:22.354 22:07:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:22.354 22:07:18 -- scripts/common.sh@335 -- # IFS=.-: 00:07:22.354 22:07:18 -- scripts/common.sh@335 -- # read -ra ver1 00:07:22.354 22:07:18 -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.354 22:07:18 -- scripts/common.sh@336 -- # read -ra ver2 00:07:22.354 22:07:18 -- scripts/common.sh@337 -- # local 'op=<' 00:07:22.354 22:07:18 -- scripts/common.sh@339 -- # ver1_l=2 00:07:22.354 22:07:18 -- scripts/common.sh@340 -- # ver2_l=1 00:07:22.354 22:07:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:22.354 22:07:18 -- scripts/common.sh@343 -- # case "$op" in 00:07:22.354 22:07:18 -- scripts/common.sh@344 -- # : 1 00:07:22.354 22:07:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:22.354 22:07:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.354 22:07:18 -- scripts/common.sh@364 -- # decimal 1 00:07:22.354 22:07:18 -- scripts/common.sh@352 -- # local d=1 00:07:22.354 22:07:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.354 22:07:18 -- scripts/common.sh@354 -- # echo 1 00:07:22.354 22:07:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:22.354 22:07:18 -- scripts/common.sh@365 -- # decimal 2 00:07:22.354 22:07:18 -- scripts/common.sh@352 -- # local d=2 00:07:22.354 22:07:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.354 22:07:18 -- scripts/common.sh@354 -- # echo 2 00:07:22.354 22:07:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:22.354 22:07:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:22.354 22:07:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:22.354 22:07:18 -- scripts/common.sh@367 -- # return 0 00:07:22.354 22:07:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.354 22:07:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:22.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.354 --rc genhtml_branch_coverage=1 00:07:22.354 --rc genhtml_function_coverage=1 00:07:22.354 --rc genhtml_legend=1 00:07:22.354 --rc geninfo_all_blocks=1 00:07:22.354 --rc geninfo_unexecuted_blocks=1 00:07:22.354 00:07:22.354 ' 00:07:22.354 22:07:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:22.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.354 --rc genhtml_branch_coverage=1 00:07:22.354 --rc genhtml_function_coverage=1 00:07:22.354 --rc genhtml_legend=1 00:07:22.354 --rc geninfo_all_blocks=1 00:07:22.354 --rc geninfo_unexecuted_blocks=1 00:07:22.354 00:07:22.354 ' 00:07:22.354 22:07:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:22.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.354 --rc genhtml_branch_coverage=1 00:07:22.354 --rc genhtml_function_coverage=1 00:07:22.354 --rc genhtml_legend=1 00:07:22.354 --rc geninfo_all_blocks=1 00:07:22.354 --rc geninfo_unexecuted_blocks=1 00:07:22.354 00:07:22.354 ' 00:07:22.354 
22:07:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:22.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.354 --rc genhtml_branch_coverage=1 00:07:22.354 --rc genhtml_function_coverage=1 00:07:22.354 --rc genhtml_legend=1 00:07:22.354 --rc geninfo_all_blocks=1 00:07:22.354 --rc geninfo_unexecuted_blocks=1 00:07:22.354 00:07:22.354 ' 00:07:22.354 22:07:18 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:22.354 22:07:18 -- nvmf/common.sh@7 -- # uname -s 00:07:22.354 22:07:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.354 22:07:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.354 22:07:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.354 22:07:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.354 22:07:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.354 22:07:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.354 22:07:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.354 22:07:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.354 22:07:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.354 22:07:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.354 22:07:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:07:22.354 22:07:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:07:22.354 22:07:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.354 22:07:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.354 22:07:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:22.354 22:07:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.354 22:07:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.354 22:07:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.354 22:07:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.354 22:07:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.354 22:07:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.354 22:07:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.354 22:07:18 -- paths/export.sh@5 -- # export PATH 00:07:22.354 22:07:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.354 22:07:18 -- nvmf/common.sh@46 -- # : 0 00:07:22.354 22:07:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:22.354 22:07:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:22.355 22:07:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:22.355 22:07:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.355 22:07:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.355 22:07:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:22.355 22:07:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:22.355 22:07:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:22.355 22:07:18 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:22.355 22:07:18 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:22.355 22:07:18 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:22.355 22:07:18 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:22.355 22:07:18 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:22.355 22:07:18 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:22.355 22:07:18 -- target/referrals.sh@37 -- # nvmftestinit 00:07:22.355 22:07:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:22.355 22:07:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.355 22:07:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:22.355 22:07:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:22.355 22:07:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:22.355 22:07:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.355 22:07:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.355 22:07:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.614 22:07:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:22.614 22:07:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:22.614 22:07:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:22.614 22:07:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:22.614 22:07:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:22.614 22:07:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:22.614 22:07:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.614 22:07:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:07:22.614 22:07:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:22.614 22:07:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:22.614 22:07:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:22.614 22:07:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:22.614 22:07:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:22.614 22:07:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.614 22:07:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:22.614 22:07:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:22.614 22:07:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:22.614 22:07:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:22.614 22:07:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:22.614 22:07:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:22.614 Cannot find device "nvmf_tgt_br" 00:07:22.614 22:07:18 -- nvmf/common.sh@154 -- # true 00:07:22.614 22:07:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:22.614 Cannot find device "nvmf_tgt_br2" 00:07:22.614 22:07:19 -- nvmf/common.sh@155 -- # true 00:07:22.614 22:07:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:22.614 22:07:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:22.614 Cannot find device "nvmf_tgt_br" 00:07:22.614 22:07:19 -- nvmf/common.sh@157 -- # true 00:07:22.614 22:07:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:22.614 Cannot find device "nvmf_tgt_br2" 00:07:22.614 22:07:19 -- nvmf/common.sh@158 -- # true 00:07:22.614 22:07:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:22.614 22:07:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:22.614 22:07:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:22.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:22.614 22:07:19 -- nvmf/common.sh@161 -- # true 00:07:22.614 22:07:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:22.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:22.614 22:07:19 -- nvmf/common.sh@162 -- # true 00:07:22.614 22:07:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:22.614 22:07:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:22.614 22:07:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:22.614 22:07:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:22.614 22:07:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:22.614 22:07:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:22.614 22:07:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:22.614 22:07:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:22.614 22:07:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:22.614 22:07:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:22.614 22:07:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:22.614 22:07:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:07:22.614 22:07:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:22.614 22:07:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:22.614 22:07:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:22.614 22:07:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:22.614 22:07:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:22.872 22:07:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:22.872 22:07:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:22.872 22:07:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:22.872 22:07:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:22.872 22:07:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:22.872 22:07:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:22.872 22:07:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:22.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:07:22.872 00:07:22.872 --- 10.0.0.2 ping statistics --- 00:07:22.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.872 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:07:22.872 22:07:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:22.872 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:22.872 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:07:22.872 00:07:22.872 --- 10.0.0.3 ping statistics --- 00:07:22.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.872 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:22.872 22:07:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:22.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:22.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:07:22.872 00:07:22.872 --- 10.0.0.1 ping statistics --- 00:07:22.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.872 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:22.872 22:07:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.872 22:07:19 -- nvmf/common.sh@421 -- # return 0 00:07:22.872 22:07:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:22.872 22:07:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.872 22:07:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:22.872 22:07:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:22.872 22:07:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.872 22:07:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:22.872 22:07:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:22.872 22:07:19 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:22.872 22:07:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:22.872 22:07:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.872 22:07:19 -- common/autotest_common.sh@10 -- # set +x 00:07:22.872 22:07:19 -- nvmf/common.sh@469 -- # nvmfpid=61743 00:07:22.872 22:07:19 -- nvmf/common.sh@470 -- # waitforlisten 61743 00:07:22.872 22:07:19 -- common/autotest_common.sh@829 -- # '[' -z 61743 ']' 00:07:22.872 22:07:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.872 22:07:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.872 22:07:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.872 22:07:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.872 22:07:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.872 22:07:19 -- common/autotest_common.sh@10 -- # set +x 00:07:22.872 [2024-11-17 22:07:19.363058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.872 [2024-11-17 22:07:19.363121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.131 [2024-11-17 22:07:19.497442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.131 [2024-11-17 22:07:19.605574] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:23.131 [2024-11-17 22:07:19.605771] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.131 [2024-11-17 22:07:19.605789] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.131 [2024-11-17 22:07:19.605800] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
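The nvmfappstart step seen just below can be reproduced by hand along the lines of the following sketch. The binary path, namespace and flags are copied from the nvmf/common.sh@468 line in the trace; the socket path /var/tmp/spdk.sock also appears in the trace. The polling loop, the scripts/rpc.py path and the use of rpc_get_methods as a liveness probe are assumptions standing in for autotest_common.sh's waitforlisten, not the harness's actual implementation.

  # Launch the SPDK NVMe-oF target inside the target namespace, on cores 0-3.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Poll the default RPC socket until the app answers, then it is safe to issue RPCs.
  # (Simplified stand-in for waitforlisten; assumes scripts/rpc.py from the same checkout.)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done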
00:07:23.131 [2024-11-17 22:07:19.605951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.131 [2024-11-17 22:07:19.606054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.131 [2024-11-17 22:07:19.606147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.131 [2024-11-17 22:07:19.606148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.068 22:07:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.068 22:07:20 -- common/autotest_common.sh@862 -- # return 0 00:07:24.068 22:07:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:24.068 22:07:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:24.068 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.068 22:07:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.068 22:07:20 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.068 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.068 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.068 [2024-11-17 22:07:20.436687] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.068 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.068 22:07:20 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:24.068 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.068 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.068 [2024-11-17 22:07:20.464889] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:24.068 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.068 22:07:20 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:24.068 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.068 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.068 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.068 22:07:20 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:24.068 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.068 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.068 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.068 22:07:20 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:24.068 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.068 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.068 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.068 22:07:20 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:24.068 22:07:20 -- target/referrals.sh@48 -- # jq length 00:07:24.068 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.068 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.068 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.068 22:07:20 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:24.068 22:07:20 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:24.068 22:07:20 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:24.068 22:07:20 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:24.068 22:07:20 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 
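The rpc_cmd calls in this stretch of the referrals test correspond to roughly the plain rpc.py sequence sketched below. The RPC names, addresses and ports are taken directly from the trace; treating rpc_cmd as a thin wrapper around scripts/rpk.py-style invocation, and the $rpc variable, are assumptions for illustration only.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Create the TCP transport and expose the discovery subsystem on 10.0.0.2:8009.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

  # Publish three discovery referrals pointing at other discovery services.
  $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430

  # The target should now report three referrals ...
  $rpc nvmf_discovery_get_referrals | jq length

  # ... and an initiator-side discovery against 10.0.0.2:8009 should list them as well.
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 \
      --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -a 10.0.0.2 -s 8009 -o json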
00:07:24.068 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.068 22:07:20 -- target/referrals.sh@21 -- # sort 00:07:24.068 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.068 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.068 22:07:20 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:24.068 22:07:20 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:24.068 22:07:20 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:24.068 22:07:20 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:24.068 22:07:20 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:24.068 22:07:20 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:24.068 22:07:20 -- target/referrals.sh@26 -- # sort 00:07:24.068 22:07:20 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:24.327 22:07:20 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:24.327 22:07:20 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:24.327 22:07:20 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:24.327 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.327 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.327 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.327 22:07:20 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:24.327 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.327 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.327 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.327 22:07:20 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:24.327 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.327 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.327 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.327 22:07:20 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:24.327 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.327 22:07:20 -- target/referrals.sh@56 -- # jq length 00:07:24.327 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.327 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.327 22:07:20 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:24.327 22:07:20 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:24.327 22:07:20 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:24.327 22:07:20 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:24.327 22:07:20 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:24.327 22:07:20 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:24.327 22:07:20 -- target/referrals.sh@26 -- # sort 00:07:24.587 22:07:20 -- target/referrals.sh@26 -- # echo 00:07:24.587 22:07:20 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:24.587 22:07:20 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:24.587 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.587 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.587 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.587 22:07:20 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:24.587 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.587 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.587 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.587 22:07:20 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:24.587 22:07:20 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:24.587 22:07:20 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:24.587 22:07:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.587 22:07:20 -- common/autotest_common.sh@10 -- # set +x 00:07:24.587 22:07:20 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:24.587 22:07:20 -- target/referrals.sh@21 -- # sort 00:07:24.587 22:07:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.587 22:07:21 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:24.587 22:07:21 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:24.587 22:07:21 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:24.587 22:07:21 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:24.587 22:07:21 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:24.587 22:07:21 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:24.587 22:07:21 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:24.587 22:07:21 -- target/referrals.sh@26 -- # sort 00:07:24.587 22:07:21 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:24.587 22:07:21 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:24.587 22:07:21 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:24.587 22:07:21 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:24.587 22:07:21 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:24.587 22:07:21 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:24.587 22:07:21 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:24.846 22:07:21 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:24.846 22:07:21 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:24.846 22:07:21 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:24.846 22:07:21 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:24.846 22:07:21 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:24.846 22:07:21 -- target/referrals.sh@33 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:24.846 22:07:21 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:24.846 22:07:21 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:24.846 22:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.846 22:07:21 -- common/autotest_common.sh@10 -- # set +x 00:07:24.846 22:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.846 22:07:21 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:24.846 22:07:21 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:24.846 22:07:21 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:24.846 22:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.846 22:07:21 -- common/autotest_common.sh@10 -- # set +x 00:07:24.846 22:07:21 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:24.846 22:07:21 -- target/referrals.sh@21 -- # sort 00:07:24.846 22:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.846 22:07:21 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:24.846 22:07:21 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:24.846 22:07:21 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:24.846 22:07:21 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:24.846 22:07:21 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:24.846 22:07:21 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:24.846 22:07:21 -- target/referrals.sh@26 -- # sort 00:07:24.846 22:07:21 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:25.105 22:07:21 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:25.105 22:07:21 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:25.105 22:07:21 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:25.105 22:07:21 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:25.105 22:07:21 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:25.105 22:07:21 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.105 22:07:21 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:25.105 22:07:21 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:25.105 22:07:21 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:25.105 22:07:21 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:25.105 22:07:21 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:25.105 22:07:21 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:25.105 22:07:21 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.364 22:07:21 -- 
target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:25.364 22:07:21 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:25.364 22:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.364 22:07:21 -- common/autotest_common.sh@10 -- # set +x 00:07:25.364 22:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.364 22:07:21 -- target/referrals.sh@82 -- # jq length 00:07:25.364 22:07:21 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.364 22:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.364 22:07:21 -- common/autotest_common.sh@10 -- # set +x 00:07:25.364 22:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.364 22:07:21 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:25.364 22:07:21 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:25.364 22:07:21 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:25.364 22:07:21 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:25.364 22:07:21 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.364 22:07:21 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:25.364 22:07:21 -- target/referrals.sh@26 -- # sort 00:07:25.624 22:07:21 -- target/referrals.sh@26 -- # echo 00:07:25.624 22:07:21 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:25.624 22:07:21 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:25.624 22:07:21 -- target/referrals.sh@86 -- # nvmftestfini 00:07:25.624 22:07:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:25.624 22:07:21 -- nvmf/common.sh@116 -- # sync 00:07:25.624 22:07:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:25.624 22:07:22 -- nvmf/common.sh@119 -- # set +e 00:07:25.624 22:07:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:25.624 22:07:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:25.624 rmmod nvme_tcp 00:07:25.624 rmmod nvme_fabrics 00:07:25.624 rmmod nvme_keyring 00:07:25.624 22:07:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:25.624 22:07:22 -- nvmf/common.sh@123 -- # set -e 00:07:25.624 22:07:22 -- nvmf/common.sh@124 -- # return 0 00:07:25.624 22:07:22 -- nvmf/common.sh@477 -- # '[' -n 61743 ']' 00:07:25.624 22:07:22 -- nvmf/common.sh@478 -- # killprocess 61743 00:07:25.624 22:07:22 -- common/autotest_common.sh@936 -- # '[' -z 61743 ']' 00:07:25.624 22:07:22 -- common/autotest_common.sh@940 -- # kill -0 61743 00:07:25.624 22:07:22 -- common/autotest_common.sh@941 -- # uname 00:07:25.624 22:07:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:25.624 22:07:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61743 00:07:25.624 22:07:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:25.624 22:07:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:25.624 killing process with pid 61743 00:07:25.624 22:07:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61743' 00:07:25.624 22:07:22 -- common/autotest_common.sh@955 -- # kill 61743 00:07:25.624 22:07:22 -- common/autotest_common.sh@960 -- # wait 61743 00:07:25.883 22:07:22 -- nvmf/common.sh@480 -- # 
'[' '' == iso ']' 00:07:25.883 22:07:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:25.883 22:07:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:25.883 22:07:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:25.883 22:07:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:25.883 22:07:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.883 22:07:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.883 22:07:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.883 22:07:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:25.883 00:07:25.883 real 0m3.697s 00:07:25.883 user 0m12.380s 00:07:25.883 sys 0m0.887s 00:07:25.883 22:07:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.883 22:07:22 -- common/autotest_common.sh@10 -- # set +x 00:07:25.883 ************************************ 00:07:25.883 END TEST nvmf_referrals 00:07:25.883 ************************************ 00:07:26.142 22:07:22 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:26.142 22:07:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:26.142 22:07:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.142 22:07:22 -- common/autotest_common.sh@10 -- # set +x 00:07:26.142 ************************************ 00:07:26.142 START TEST nvmf_connect_disconnect 00:07:26.142 ************************************ 00:07:26.142 22:07:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:26.142 * Looking for test storage... 00:07:26.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.142 22:07:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:26.142 22:07:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:26.142 22:07:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:26.142 22:07:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:26.142 22:07:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:26.142 22:07:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:26.142 22:07:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:26.142 22:07:22 -- scripts/common.sh@335 -- # IFS=.-: 00:07:26.142 22:07:22 -- scripts/common.sh@335 -- # read -ra ver1 00:07:26.142 22:07:22 -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.142 22:07:22 -- scripts/common.sh@336 -- # read -ra ver2 00:07:26.142 22:07:22 -- scripts/common.sh@337 -- # local 'op=<' 00:07:26.142 22:07:22 -- scripts/common.sh@339 -- # ver1_l=2 00:07:26.142 22:07:22 -- scripts/common.sh@340 -- # ver2_l=1 00:07:26.142 22:07:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:26.142 22:07:22 -- scripts/common.sh@343 -- # case "$op" in 00:07:26.142 22:07:22 -- scripts/common.sh@344 -- # : 1 00:07:26.142 22:07:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:26.142 22:07:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.142 22:07:22 -- scripts/common.sh@364 -- # decimal 1 00:07:26.142 22:07:22 -- scripts/common.sh@352 -- # local d=1 00:07:26.142 22:07:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.142 22:07:22 -- scripts/common.sh@354 -- # echo 1 00:07:26.142 22:07:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:26.142 22:07:22 -- scripts/common.sh@365 -- # decimal 2 00:07:26.142 22:07:22 -- scripts/common.sh@352 -- # local d=2 00:07:26.142 22:07:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.142 22:07:22 -- scripts/common.sh@354 -- # echo 2 00:07:26.142 22:07:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:26.142 22:07:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:26.142 22:07:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:26.142 22:07:22 -- scripts/common.sh@367 -- # return 0 00:07:26.142 22:07:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.142 22:07:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:26.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.142 --rc genhtml_branch_coverage=1 00:07:26.142 --rc genhtml_function_coverage=1 00:07:26.142 --rc genhtml_legend=1 00:07:26.142 --rc geninfo_all_blocks=1 00:07:26.142 --rc geninfo_unexecuted_blocks=1 00:07:26.142 00:07:26.142 ' 00:07:26.142 22:07:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:26.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.142 --rc genhtml_branch_coverage=1 00:07:26.142 --rc genhtml_function_coverage=1 00:07:26.142 --rc genhtml_legend=1 00:07:26.142 --rc geninfo_all_blocks=1 00:07:26.142 --rc geninfo_unexecuted_blocks=1 00:07:26.142 00:07:26.142 ' 00:07:26.142 22:07:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:26.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.142 --rc genhtml_branch_coverage=1 00:07:26.142 --rc genhtml_function_coverage=1 00:07:26.142 --rc genhtml_legend=1 00:07:26.142 --rc geninfo_all_blocks=1 00:07:26.142 --rc geninfo_unexecuted_blocks=1 00:07:26.142 00:07:26.142 ' 00:07:26.142 22:07:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:26.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.143 --rc genhtml_branch_coverage=1 00:07:26.143 --rc genhtml_function_coverage=1 00:07:26.143 --rc genhtml_legend=1 00:07:26.143 --rc geninfo_all_blocks=1 00:07:26.143 --rc geninfo_unexecuted_blocks=1 00:07:26.143 00:07:26.143 ' 00:07:26.143 22:07:22 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.143 22:07:22 -- nvmf/common.sh@7 -- # uname -s 00:07:26.143 22:07:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.143 22:07:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.143 22:07:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.143 22:07:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.143 22:07:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.143 22:07:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.143 22:07:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.143 22:07:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.143 22:07:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.143 22:07:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.143 22:07:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 
00:07:26.143 22:07:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:07:26.143 22:07:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.143 22:07:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.143 22:07:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:26.143 22:07:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.143 22:07:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.143 22:07:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.143 22:07:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.143 22:07:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.143 22:07:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.143 22:07:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.143 22:07:22 -- paths/export.sh@5 -- # export PATH 00:07:26.143 22:07:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.143 22:07:22 -- nvmf/common.sh@46 -- # : 0 00:07:26.143 22:07:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:26.143 22:07:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:26.143 22:07:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:26.143 22:07:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.143 22:07:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.143 22:07:22 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:26.143 22:07:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:26.143 22:07:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:26.143 22:07:22 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.143 22:07:22 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:26.143 22:07:22 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:26.143 22:07:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:26.143 22:07:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.143 22:07:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:26.143 22:07:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:26.143 22:07:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:26.143 22:07:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.143 22:07:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.143 22:07:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.143 22:07:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:26.143 22:07:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:26.143 22:07:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:26.143 22:07:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:26.143 22:07:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:26.143 22:07:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:26.143 22:07:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.143 22:07:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.143 22:07:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:26.143 22:07:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:26.143 22:07:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:26.143 22:07:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:26.143 22:07:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:26.143 22:07:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.143 22:07:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:26.143 22:07:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:26.143 22:07:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:26.143 22:07:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:26.143 22:07:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:26.402 22:07:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:26.402 Cannot find device "nvmf_tgt_br" 00:07:26.402 22:07:22 -- nvmf/common.sh@154 -- # true 00:07:26.402 22:07:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:26.402 Cannot find device "nvmf_tgt_br2" 00:07:26.402 22:07:22 -- nvmf/common.sh@155 -- # true 00:07:26.402 22:07:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:26.402 22:07:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:26.402 Cannot find device "nvmf_tgt_br" 00:07:26.402 22:07:22 -- nvmf/common.sh@157 -- # true 00:07:26.402 22:07:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:26.402 Cannot find device "nvmf_tgt_br2" 00:07:26.402 22:07:22 -- nvmf/common.sh@158 -- # true 00:07:26.402 22:07:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:26.402 22:07:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:26.402 22:07:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:07:26.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.402 22:07:22 -- nvmf/common.sh@161 -- # true 00:07:26.402 22:07:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:26.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.402 22:07:22 -- nvmf/common.sh@162 -- # true 00:07:26.402 22:07:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:26.402 22:07:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:26.402 22:07:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:26.402 22:07:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:26.402 22:07:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:26.402 22:07:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:26.402 22:07:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:26.402 22:07:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:26.403 22:07:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:26.403 22:07:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:26.403 22:07:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:26.403 22:07:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:26.403 22:07:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:26.403 22:07:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:26.403 22:07:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:26.403 22:07:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:26.403 22:07:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:26.403 22:07:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:26.403 22:07:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:26.661 22:07:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:26.661 22:07:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:26.661 22:07:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:26.661 22:07:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:26.661 22:07:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:26.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:07:26.661 00:07:26.661 --- 10.0.0.2 ping statistics --- 00:07:26.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.661 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:07:26.661 22:07:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:26.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:26.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:07:26.661 00:07:26.661 --- 10.0.0.3 ping statistics --- 00:07:26.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.661 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:07:26.661 22:07:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:26.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:26.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:07:26.661 00:07:26.661 --- 10.0.0.1 ping statistics --- 00:07:26.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.661 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:07:26.662 22:07:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.662 22:07:23 -- nvmf/common.sh@421 -- # return 0 00:07:26.662 22:07:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:26.662 22:07:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.662 22:07:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:26.662 22:07:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:26.662 22:07:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.662 22:07:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:26.662 22:07:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:26.662 22:07:23 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:26.662 22:07:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:26.662 22:07:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.662 22:07:23 -- common/autotest_common.sh@10 -- # set +x 00:07:26.662 22:07:23 -- nvmf/common.sh@469 -- # nvmfpid=62061 00:07:26.662 22:07:23 -- nvmf/common.sh@470 -- # waitforlisten 62061 00:07:26.662 22:07:23 -- common/autotest_common.sh@829 -- # '[' -z 62061 ']' 00:07:26.662 22:07:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:26.662 22:07:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.662 22:07:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.662 22:07:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.662 22:07:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.662 22:07:23 -- common/autotest_common.sh@10 -- # set +x 00:07:26.662 [2024-11-17 22:07:23.156444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:26.662 [2024-11-17 22:07:23.156539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.920 [2024-11-17 22:07:23.291396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.920 [2024-11-17 22:07:23.384840] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:26.920 [2024-11-17 22:07:23.384986] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.920 [2024-11-17 22:07:23.384998] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.920 [2024-11-17 22:07:23.385006] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
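Note: nvmf_veth_init, traced above, builds a small virtual topology — the initiator keeps one end of a veth pair in the root namespace, the target interfaces live in nvmf_tgt_ns_spdk, and a bridge ties the ends together so 10.0.0.1 (initiator) can reach 10.0.0.2 (target), verified by the pings. A condensed sketch of the same steps with the names and addresses the test uses (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side of the pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side of the pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2      # initiator -> target sanity check, as in the log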
00:07:26.920 [2024-11-17 22:07:23.385150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.920 [2024-11-17 22:07:23.385272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.920 [2024-11-17 22:07:23.385381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.920 [2024-11-17 22:07:23.385387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.856 22:07:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.856 22:07:24 -- common/autotest_common.sh@862 -- # return 0 00:07:27.856 22:07:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:27.856 22:07:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.856 22:07:24 -- common/autotest_common.sh@10 -- # set +x 00:07:27.856 22:07:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.856 22:07:24 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:27.856 22:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.856 22:07:24 -- common/autotest_common.sh@10 -- # set +x 00:07:27.856 [2024-11-17 22:07:24.272800] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.856 22:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.856 22:07:24 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:27.856 22:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.856 22:07:24 -- common/autotest_common.sh@10 -- # set +x 00:07:27.856 22:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.856 22:07:24 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:27.856 22:07:24 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:27.856 22:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.856 22:07:24 -- common/autotest_common.sh@10 -- # set +x 00:07:27.856 22:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.856 22:07:24 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:27.856 22:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.856 22:07:24 -- common/autotest_common.sh@10 -- # set +x 00:07:27.856 22:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.856 22:07:24 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.856 22:07:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.856 22:07:24 -- common/autotest_common.sh@10 -- # set +x 00:07:27.856 [2024-11-17 22:07:24.343187] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.856 22:07:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.856 22:07:24 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:27.856 22:07:24 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:27.856 22:07:24 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:27.856 22:07:24 -- target/connect_disconnect.sh@34 -- # set +x 00:07:30.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:07:39.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.219 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:09:29.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.235 22:11:09 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
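Note: the long run of "disconnected 1 controller(s)" messages above is the body of the test — 100 iterations of connecting an initiator to nqn.2016-06.io.spdk:cnode1 and disconnecting again. A minimal sketch of the target setup and the per-iteration loop, matching the RPCs and the NVME_CONNECT='nvme connect -i 8' setting traced earlier; the explicit nvme disconnect command is implied by the disconnect messages rather than traced in this excerpt:

    # target side (over the RPC socket): transport, backing bdev, subsystem, namespace, listener
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512                                   # creates Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: each disconnect prints the message repeated above
    for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      # (the real test also waits for the namespace device to appear before disconnecting)
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done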
00:11:13.235 22:11:09 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:13.235 22:11:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:13.235 22:11:09 -- nvmf/common.sh@116 -- # sync 00:11:13.235 22:11:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:13.235 22:11:09 -- nvmf/common.sh@119 -- # set +e 00:11:13.235 22:11:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:13.235 22:11:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:13.235 rmmod nvme_tcp 00:11:13.235 rmmod nvme_fabrics 00:11:13.235 rmmod nvme_keyring 00:11:13.235 22:11:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:13.235 22:11:09 -- nvmf/common.sh@123 -- # set -e 00:11:13.235 22:11:09 -- nvmf/common.sh@124 -- # return 0 00:11:13.235 22:11:09 -- nvmf/common.sh@477 -- # '[' -n 62061 ']' 00:11:13.235 22:11:09 -- nvmf/common.sh@478 -- # killprocess 62061 00:11:13.235 22:11:09 -- common/autotest_common.sh@936 -- # '[' -z 62061 ']' 00:11:13.235 22:11:09 -- common/autotest_common.sh@940 -- # kill -0 62061 00:11:13.235 22:11:09 -- common/autotest_common.sh@941 -- # uname 00:11:13.235 22:11:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:13.235 22:11:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62061 00:11:13.235 22:11:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:13.235 22:11:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:13.235 killing process with pid 62061 00:11:13.235 22:11:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62061' 00:11:13.235 22:11:09 -- common/autotest_common.sh@955 -- # kill 62061 00:11:13.235 22:11:09 -- common/autotest_common.sh@960 -- # wait 62061 00:11:13.235 22:11:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:13.235 22:11:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:13.235 22:11:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:13.235 22:11:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:13.235 22:11:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:13.235 22:11:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.235 22:11:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:13.235 22:11:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.235 22:11:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:13.235 00:11:13.235 real 3m47.296s 00:11:13.235 user 14m48.945s 00:11:13.235 sys 0m22.369s 00:11:13.235 22:11:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:13.235 22:11:09 -- common/autotest_common.sh@10 -- # set +x 00:11:13.235 ************************************ 00:11:13.235 END TEST nvmf_connect_disconnect 00:11:13.235 ************************************ 00:11:13.494 22:11:09 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:13.494 22:11:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:13.494 22:11:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:13.494 22:11:09 -- common/autotest_common.sh@10 -- # set +x 00:11:13.494 ************************************ 00:11:13.494 START TEST nvmf_multitarget 00:11:13.494 ************************************ 00:11:13.494 22:11:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:13.494 * Looking for test storage... 
00:11:13.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:13.494 22:11:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:13.494 22:11:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:13.494 22:11:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:13.494 22:11:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:13.494 22:11:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:13.494 22:11:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:13.494 22:11:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:13.494 22:11:10 -- scripts/common.sh@335 -- # IFS=.-: 00:11:13.494 22:11:10 -- scripts/common.sh@335 -- # read -ra ver1 00:11:13.494 22:11:10 -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.494 22:11:10 -- scripts/common.sh@336 -- # read -ra ver2 00:11:13.494 22:11:10 -- scripts/common.sh@337 -- # local 'op=<' 00:11:13.494 22:11:10 -- scripts/common.sh@339 -- # ver1_l=2 00:11:13.494 22:11:10 -- scripts/common.sh@340 -- # ver2_l=1 00:11:13.494 22:11:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:13.494 22:11:10 -- scripts/common.sh@343 -- # case "$op" in 00:11:13.494 22:11:10 -- scripts/common.sh@344 -- # : 1 00:11:13.494 22:11:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:13.494 22:11:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:13.494 22:11:10 -- scripts/common.sh@364 -- # decimal 1 00:11:13.494 22:11:10 -- scripts/common.sh@352 -- # local d=1 00:11:13.494 22:11:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.494 22:11:10 -- scripts/common.sh@354 -- # echo 1 00:11:13.494 22:11:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:13.494 22:11:10 -- scripts/common.sh@365 -- # decimal 2 00:11:13.494 22:11:10 -- scripts/common.sh@352 -- # local d=2 00:11:13.494 22:11:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.494 22:11:10 -- scripts/common.sh@354 -- # echo 2 00:11:13.494 22:11:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:13.494 22:11:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:13.494 22:11:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:13.494 22:11:10 -- scripts/common.sh@367 -- # return 0 00:11:13.494 22:11:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.494 22:11:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:13.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.494 --rc genhtml_branch_coverage=1 00:11:13.494 --rc genhtml_function_coverage=1 00:11:13.494 --rc genhtml_legend=1 00:11:13.494 --rc geninfo_all_blocks=1 00:11:13.494 --rc geninfo_unexecuted_blocks=1 00:11:13.494 00:11:13.494 ' 00:11:13.494 22:11:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:13.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.494 --rc genhtml_branch_coverage=1 00:11:13.494 --rc genhtml_function_coverage=1 00:11:13.494 --rc genhtml_legend=1 00:11:13.494 --rc geninfo_all_blocks=1 00:11:13.494 --rc geninfo_unexecuted_blocks=1 00:11:13.494 00:11:13.494 ' 00:11:13.494 22:11:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:13.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.494 --rc genhtml_branch_coverage=1 00:11:13.494 --rc genhtml_function_coverage=1 00:11:13.494 --rc genhtml_legend=1 00:11:13.494 --rc geninfo_all_blocks=1 00:11:13.494 --rc geninfo_unexecuted_blocks=1 00:11:13.494 00:11:13.494 ' 00:11:13.494 
22:11:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:13.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.494 --rc genhtml_branch_coverage=1 00:11:13.494 --rc genhtml_function_coverage=1 00:11:13.494 --rc genhtml_legend=1 00:11:13.494 --rc geninfo_all_blocks=1 00:11:13.494 --rc geninfo_unexecuted_blocks=1 00:11:13.494 00:11:13.494 ' 00:11:13.494 22:11:10 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:13.494 22:11:10 -- nvmf/common.sh@7 -- # uname -s 00:11:13.495 22:11:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.495 22:11:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.495 22:11:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.495 22:11:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.495 22:11:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.495 22:11:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.495 22:11:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.495 22:11:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.495 22:11:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.495 22:11:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.495 22:11:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:11:13.495 22:11:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:11:13.495 22:11:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.495 22:11:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.495 22:11:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:13.495 22:11:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:13.495 22:11:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.495 22:11:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.495 22:11:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.495 22:11:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.495 22:11:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.495 22:11:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.495 22:11:10 -- paths/export.sh@5 -- # export PATH 00:11:13.495 22:11:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.495 22:11:10 -- nvmf/common.sh@46 -- # : 0 00:11:13.495 22:11:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:13.495 22:11:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:13.495 22:11:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:13.495 22:11:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.495 22:11:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.495 22:11:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:13.495 22:11:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:13.495 22:11:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:13.495 22:11:10 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:13.495 22:11:10 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:13.495 22:11:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:13.495 22:11:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.495 22:11:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:13.495 22:11:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:13.495 22:11:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:13.495 22:11:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.495 22:11:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:13.495 22:11:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.495 22:11:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:13.495 22:11:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:13.495 22:11:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:13.495 22:11:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:13.495 22:11:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:13.495 22:11:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:13.495 22:11:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.495 22:11:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.495 22:11:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:13.495 22:11:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:13.495 22:11:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:13.495 22:11:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:13.495 22:11:10 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:13.495 22:11:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.495 22:11:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:13.495 22:11:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:13.495 22:11:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:13.495 22:11:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:13.495 22:11:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:13.495 22:11:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:13.495 Cannot find device "nvmf_tgt_br" 00:11:13.495 22:11:10 -- nvmf/common.sh@154 -- # true 00:11:13.495 22:11:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:13.754 Cannot find device "nvmf_tgt_br2" 00:11:13.754 22:11:10 -- nvmf/common.sh@155 -- # true 00:11:13.754 22:11:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:13.754 22:11:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:13.754 Cannot find device "nvmf_tgt_br" 00:11:13.754 22:11:10 -- nvmf/common.sh@157 -- # true 00:11:13.754 22:11:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:13.754 Cannot find device "nvmf_tgt_br2" 00:11:13.754 22:11:10 -- nvmf/common.sh@158 -- # true 00:11:13.754 22:11:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:13.754 22:11:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:13.754 22:11:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:13.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:13.754 22:11:10 -- nvmf/common.sh@161 -- # true 00:11:13.754 22:11:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:13.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:13.754 22:11:10 -- nvmf/common.sh@162 -- # true 00:11:13.754 22:11:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:13.754 22:11:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:13.754 22:11:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:13.754 22:11:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:13.754 22:11:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:13.754 22:11:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:13.754 22:11:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:13.754 22:11:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:13.754 22:11:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:13.754 22:11:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:13.754 22:11:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:13.754 22:11:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:13.754 22:11:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:13.754 22:11:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:13.754 22:11:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:13.754 22:11:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:13.754 22:11:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:13.754 22:11:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:13.754 22:11:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:14.013 22:11:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:14.013 22:11:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:14.013 22:11:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:14.013 22:11:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:14.013 22:11:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:14.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:11:14.013 00:11:14.013 --- 10.0.0.2 ping statistics --- 00:11:14.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.013 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:11:14.013 22:11:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:14.013 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:14.013 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:11:14.013 00:11:14.013 --- 10.0.0.3 ping statistics --- 00:11:14.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.013 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:14.013 22:11:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:14.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:14.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:14.013 00:11:14.013 --- 10.0.0.1 ping statistics --- 00:11:14.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.013 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:14.013 22:11:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.013 22:11:10 -- nvmf/common.sh@421 -- # return 0 00:11:14.013 22:11:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:14.013 22:11:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.013 22:11:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:14.013 22:11:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:14.013 22:11:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.013 22:11:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:14.013 22:11:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:14.013 22:11:10 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:14.013 22:11:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:14.013 22:11:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:14.013 22:11:10 -- common/autotest_common.sh@10 -- # set +x 00:11:14.013 22:11:10 -- nvmf/common.sh@469 -- # nvmfpid=65857 00:11:14.013 22:11:10 -- nvmf/common.sh@470 -- # waitforlisten 65857 00:11:14.013 22:11:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.013 22:11:10 -- common/autotest_common.sh@829 -- # '[' -z 65857 ']' 00:11:14.013 22:11:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.013 22:11:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:14.013 22:11:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
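For reference, the nvmf_veth_init sequence traced above boils down to roughly the following standalone sketch. Interface names, addresses, and the iptables rules are taken from the trace; the real helper lives in test/nvmf/common.sh and carries extra cleanup and error handling that is omitted here.

#!/usr/bin/env bash
# Rough sketch of the veth/namespace topology built by nvmf_veth_init above.
set -e
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# Initiator-side veth pair stays on the host; target-side *_if ends move into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: initiator 10.0.0.1, first target 10.0.0.2, second target 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and join the bridge ends into one L2 segment.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in and bridge-internal forwarding, then sanity-ping both directions.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1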
00:11:14.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.013 22:11:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:14.013 22:11:10 -- common/autotest_common.sh@10 -- # set +x 00:11:14.013 [2024-11-17 22:11:10.497815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:14.013 [2024-11-17 22:11:10.497969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.272 [2024-11-17 22:11:10.642259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.272 [2024-11-17 22:11:10.759635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:14.272 [2024-11-17 22:11:10.759837] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.272 [2024-11-17 22:11:10.759856] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.272 [2024-11-17 22:11:10.759868] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.272 [2024-11-17 22:11:10.760065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.272 [2024-11-17 22:11:10.760217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.272 [2024-11-17 22:11:10.760329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.272 [2024-11-17 22:11:10.760336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.839 22:11:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:14.839 22:11:11 -- common/autotest_common.sh@862 -- # return 0 00:11:14.839 22:11:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:14.839 22:11:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:14.839 22:11:11 -- common/autotest_common.sh@10 -- # set +x 00:11:14.839 22:11:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.839 22:11:11 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:14.839 22:11:11 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:14.839 22:11:11 -- target/multitarget.sh@21 -- # jq length 00:11:15.098 22:11:11 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:15.098 22:11:11 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:15.098 "nvmf_tgt_1" 00:11:15.357 22:11:11 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:15.357 "nvmf_tgt_2" 00:11:15.357 22:11:11 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:15.357 22:11:11 -- target/multitarget.sh@28 -- # jq length 00:11:15.615 22:11:11 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:15.615 22:11:11 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:15.615 true 00:11:15.615 22:11:12 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target -n nvmf_tgt_2 00:11:15.874 true 00:11:15.874 22:11:12 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:15.874 22:11:12 -- target/multitarget.sh@35 -- # jq length 00:11:15.874 22:11:12 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:15.874 22:11:12 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:15.874 22:11:12 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:15.874 22:11:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:15.874 22:11:12 -- nvmf/common.sh@116 -- # sync 00:11:15.874 22:11:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:15.874 22:11:12 -- nvmf/common.sh@119 -- # set +e 00:11:15.874 22:11:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:15.874 22:11:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:15.874 rmmod nvme_tcp 00:11:15.874 rmmod nvme_fabrics 00:11:16.133 rmmod nvme_keyring 00:11:16.133 22:11:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:16.133 22:11:12 -- nvmf/common.sh@123 -- # set -e 00:11:16.133 22:11:12 -- nvmf/common.sh@124 -- # return 0 00:11:16.133 22:11:12 -- nvmf/common.sh@477 -- # '[' -n 65857 ']' 00:11:16.133 22:11:12 -- nvmf/common.sh@478 -- # killprocess 65857 00:11:16.133 22:11:12 -- common/autotest_common.sh@936 -- # '[' -z 65857 ']' 00:11:16.133 22:11:12 -- common/autotest_common.sh@940 -- # kill -0 65857 00:11:16.133 22:11:12 -- common/autotest_common.sh@941 -- # uname 00:11:16.133 22:11:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:16.133 22:11:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65857 00:11:16.133 22:11:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:16.133 22:11:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:16.133 killing process with pid 65857 00:11:16.133 22:11:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65857' 00:11:16.133 22:11:12 -- common/autotest_common.sh@955 -- # kill 65857 00:11:16.133 22:11:12 -- common/autotest_common.sh@960 -- # wait 65857 00:11:16.392 22:11:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:16.392 22:11:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:16.392 22:11:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:16.392 22:11:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:16.392 22:11:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:16.392 22:11:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.392 22:11:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.392 22:11:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.392 22:11:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:16.392 ************************************ 00:11:16.392 END TEST nvmf_multitarget 00:11:16.392 ************************************ 00:11:16.392 00:11:16.392 real 0m3.017s 00:11:16.392 user 0m9.508s 00:11:16.392 sys 0m0.719s 00:11:16.392 22:11:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:16.392 22:11:12 -- common/autotest_common.sh@10 -- # set +x 00:11:16.392 22:11:12 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:16.392 22:11:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:16.392 22:11:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.392 22:11:12 -- common/autotest_common.sh@10 -- # set +x 00:11:16.392 
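The multitarget checks above reduce to a short sequence of calls against multitarget_rpc.py; a minimal sketch follows (script path copied from the trace, assertions written as plain test commands instead of the trap-based error handling the suite uses).

#!/usr/bin/env bash
# Minimal sketch of the nvmf_multitarget flow traced above.
rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

count() { "$rpc" nvmf_get_targets | jq length; }

[ "$(count)" -eq 1 ]                      # only the default target exists

"$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32
"$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$(count)" -eq 3 ]                      # default target plus the two just created

"$rpc" nvmf_delete_target -n nvmf_tgt_1
"$rpc" nvmf_delete_target -n nvmf_tgt_2
[ "$(count)" -eq 1 ]                      # back to just the default target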
************************************ 00:11:16.392 START TEST nvmf_rpc 00:11:16.392 ************************************ 00:11:16.392 22:11:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:16.651 * Looking for test storage... 00:11:16.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:16.651 22:11:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:16.651 22:11:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:16.651 22:11:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:16.651 22:11:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:16.652 22:11:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:16.652 22:11:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:16.652 22:11:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:16.652 22:11:13 -- scripts/common.sh@335 -- # IFS=.-: 00:11:16.652 22:11:13 -- scripts/common.sh@335 -- # read -ra ver1 00:11:16.652 22:11:13 -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.652 22:11:13 -- scripts/common.sh@336 -- # read -ra ver2 00:11:16.652 22:11:13 -- scripts/common.sh@337 -- # local 'op=<' 00:11:16.652 22:11:13 -- scripts/common.sh@339 -- # ver1_l=2 00:11:16.652 22:11:13 -- scripts/common.sh@340 -- # ver2_l=1 00:11:16.652 22:11:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:16.652 22:11:13 -- scripts/common.sh@343 -- # case "$op" in 00:11:16.652 22:11:13 -- scripts/common.sh@344 -- # : 1 00:11:16.652 22:11:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:16.652 22:11:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:16.652 22:11:13 -- scripts/common.sh@364 -- # decimal 1 00:11:16.652 22:11:13 -- scripts/common.sh@352 -- # local d=1 00:11:16.652 22:11:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.652 22:11:13 -- scripts/common.sh@354 -- # echo 1 00:11:16.652 22:11:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:16.652 22:11:13 -- scripts/common.sh@365 -- # decimal 2 00:11:16.652 22:11:13 -- scripts/common.sh@352 -- # local d=2 00:11:16.652 22:11:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.652 22:11:13 -- scripts/common.sh@354 -- # echo 2 00:11:16.652 22:11:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:16.652 22:11:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:16.652 22:11:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:16.652 22:11:13 -- scripts/common.sh@367 -- # return 0 00:11:16.652 22:11:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.652 22:11:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:16.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.652 --rc genhtml_branch_coverage=1 00:11:16.652 --rc genhtml_function_coverage=1 00:11:16.652 --rc genhtml_legend=1 00:11:16.652 --rc geninfo_all_blocks=1 00:11:16.652 --rc geninfo_unexecuted_blocks=1 00:11:16.652 00:11:16.652 ' 00:11:16.652 22:11:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:16.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.652 --rc genhtml_branch_coverage=1 00:11:16.652 --rc genhtml_function_coverage=1 00:11:16.652 --rc genhtml_legend=1 00:11:16.652 --rc geninfo_all_blocks=1 00:11:16.652 --rc geninfo_unexecuted_blocks=1 00:11:16.652 00:11:16.652 ' 00:11:16.652 22:11:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:16.652 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.652 --rc genhtml_branch_coverage=1 00:11:16.652 --rc genhtml_function_coverage=1 00:11:16.652 --rc genhtml_legend=1 00:11:16.652 --rc geninfo_all_blocks=1 00:11:16.652 --rc geninfo_unexecuted_blocks=1 00:11:16.652 00:11:16.652 ' 00:11:16.652 22:11:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:16.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.652 --rc genhtml_branch_coverage=1 00:11:16.652 --rc genhtml_function_coverage=1 00:11:16.652 --rc genhtml_legend=1 00:11:16.652 --rc geninfo_all_blocks=1 00:11:16.652 --rc geninfo_unexecuted_blocks=1 00:11:16.652 00:11:16.652 ' 00:11:16.652 22:11:13 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:16.652 22:11:13 -- nvmf/common.sh@7 -- # uname -s 00:11:16.652 22:11:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.652 22:11:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.652 22:11:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.652 22:11:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.652 22:11:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.652 22:11:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.652 22:11:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.652 22:11:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.652 22:11:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.652 22:11:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.652 22:11:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:11:16.652 22:11:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:11:16.652 22:11:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.652 22:11:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.652 22:11:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:16.652 22:11:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.652 22:11:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.652 22:11:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.652 22:11:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.652 22:11:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.652 22:11:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.652 22:11:13 -- paths/export.sh@4 -- 
# PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.652 22:11:13 -- paths/export.sh@5 -- # export PATH 00:11:16.652 22:11:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.652 22:11:13 -- nvmf/common.sh@46 -- # : 0 00:11:16.652 22:11:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:16.652 22:11:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:16.652 22:11:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:16.652 22:11:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.652 22:11:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.652 22:11:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:16.652 22:11:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:16.652 22:11:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:16.652 22:11:13 -- target/rpc.sh@11 -- # loops=5 00:11:16.652 22:11:13 -- target/rpc.sh@23 -- # nvmftestinit 00:11:16.652 22:11:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:16.652 22:11:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.652 22:11:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:16.652 22:11:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:16.652 22:11:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:16.652 22:11:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.652 22:11:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.652 22:11:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.652 22:11:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:16.652 22:11:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:16.652 22:11:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:16.652 22:11:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:16.652 22:11:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:16.652 22:11:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:16.652 22:11:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.652 22:11:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.652 22:11:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:16.652 22:11:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:16.652 22:11:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:16.652 22:11:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:16.652 22:11:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:16.652 22:11:13 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.652 22:11:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:16.652 22:11:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:16.652 22:11:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:16.652 22:11:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:16.652 22:11:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:16.652 22:11:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:16.652 Cannot find device "nvmf_tgt_br" 00:11:16.652 22:11:13 -- nvmf/common.sh@154 -- # true 00:11:16.652 22:11:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.652 Cannot find device "nvmf_tgt_br2" 00:11:16.652 22:11:13 -- nvmf/common.sh@155 -- # true 00:11:16.652 22:11:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:16.652 22:11:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:16.652 Cannot find device "nvmf_tgt_br" 00:11:16.652 22:11:13 -- nvmf/common.sh@157 -- # true 00:11:16.652 22:11:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:16.652 Cannot find device "nvmf_tgt_br2" 00:11:16.652 22:11:13 -- nvmf/common.sh@158 -- # true 00:11:16.652 22:11:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:16.912 22:11:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:16.912 22:11:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.912 22:11:13 -- nvmf/common.sh@161 -- # true 00:11:16.912 22:11:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.912 22:11:13 -- nvmf/common.sh@162 -- # true 00:11:16.912 22:11:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:16.912 22:11:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:16.912 22:11:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:16.912 22:11:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:16.912 22:11:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:16.912 22:11:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:16.912 22:11:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:16.912 22:11:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:16.912 22:11:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:16.912 22:11:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:16.912 22:11:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:16.912 22:11:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:16.912 22:11:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:16.912 22:11:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:16.912 22:11:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:16.912 22:11:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:16.912 22:11:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:11:16.912 22:11:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:16.912 22:11:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:16.912 22:11:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:16.912 22:11:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:16.912 22:11:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:16.912 22:11:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:16.912 22:11:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:16.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:11:16.912 00:11:16.912 --- 10.0.0.2 ping statistics --- 00:11:16.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.912 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:16.912 22:11:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:16.912 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:16.912 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:11:16.912 00:11:16.912 --- 10.0.0.3 ping statistics --- 00:11:16.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.912 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:16.912 22:11:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:16.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:16.912 00:11:16.912 --- 10.0.0.1 ping statistics --- 00:11:16.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.912 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:16.912 22:11:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.912 22:11:13 -- nvmf/common.sh@421 -- # return 0 00:11:16.912 22:11:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:16.912 22:11:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.912 22:11:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:16.912 22:11:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:16.912 22:11:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.912 22:11:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:16.912 22:11:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:16.912 22:11:13 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:16.912 22:11:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:16.912 22:11:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:16.912 22:11:13 -- common/autotest_common.sh@10 -- # set +x 00:11:16.912 22:11:13 -- nvmf/common.sh@469 -- # nvmfpid=66097 00:11:16.912 22:11:13 -- nvmf/common.sh@470 -- # waitforlisten 66097 00:11:16.912 22:11:13 -- common/autotest_common.sh@829 -- # '[' -z 66097 ']' 00:11:16.912 22:11:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.912 22:11:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.912 22:11:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.912 22:11:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
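As in the earlier run, nvmfappstart launches the target inside the test namespace and blocks until its RPC socket is listening. A simplified stand-in is sketched below; the binary path and flags are the ones shown in the trace, while the polling loop is a reduced version of waitforlisten (the real helper also retries and checks process liveness more carefully).

#!/usr/bin/env bash
# Sketch of nvmfappstart as traced above: run nvmf_tgt in the namespace,
# then wait for the UNIX-domain RPC socket to appear.
NS=nvmf_tgt_ns_spdk
APP=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt

ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 1
done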
00:11:16.912 22:11:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.912 22:11:13 -- common/autotest_common.sh@10 -- # set +x 00:11:17.172 [2024-11-17 22:11:13.582379] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:17.172 [2024-11-17 22:11:13.582463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.172 [2024-11-17 22:11:13.723448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.431 [2024-11-17 22:11:13.825344] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:17.431 [2024-11-17 22:11:13.825470] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.431 [2024-11-17 22:11:13.825482] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.431 [2024-11-17 22:11:13.825489] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.431 [2024-11-17 22:11:13.825603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.431 [2024-11-17 22:11:13.825887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.431 [2024-11-17 22:11:13.825950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.431 [2024-11-17 22:11:13.825952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.368 22:11:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:18.368 22:11:14 -- common/autotest_common.sh@862 -- # return 0 00:11:18.368 22:11:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:18.368 22:11:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:18.368 22:11:14 -- common/autotest_common.sh@10 -- # set +x 00:11:18.368 22:11:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.368 22:11:14 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:18.368 22:11:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.368 22:11:14 -- common/autotest_common.sh@10 -- # set +x 00:11:18.368 22:11:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.368 22:11:14 -- target/rpc.sh@26 -- # stats='{ 00:11:18.368 "poll_groups": [ 00:11:18.368 { 00:11:18.368 "admin_qpairs": 0, 00:11:18.368 "completed_nvme_io": 0, 00:11:18.368 "current_admin_qpairs": 0, 00:11:18.368 "current_io_qpairs": 0, 00:11:18.368 "io_qpairs": 0, 00:11:18.368 "name": "nvmf_tgt_poll_group_0", 00:11:18.368 "pending_bdev_io": 0, 00:11:18.368 "transports": [] 00:11:18.368 }, 00:11:18.368 { 00:11:18.368 "admin_qpairs": 0, 00:11:18.368 "completed_nvme_io": 0, 00:11:18.368 "current_admin_qpairs": 0, 00:11:18.368 "current_io_qpairs": 0, 00:11:18.368 "io_qpairs": 0, 00:11:18.368 "name": "nvmf_tgt_poll_group_1", 00:11:18.368 "pending_bdev_io": 0, 00:11:18.368 "transports": [] 00:11:18.368 }, 00:11:18.368 { 00:11:18.368 "admin_qpairs": 0, 00:11:18.368 "completed_nvme_io": 0, 00:11:18.368 "current_admin_qpairs": 0, 00:11:18.368 "current_io_qpairs": 0, 00:11:18.368 "io_qpairs": 0, 00:11:18.368 "name": "nvmf_tgt_poll_group_2", 00:11:18.368 "pending_bdev_io": 0, 00:11:18.368 "transports": [] 00:11:18.368 }, 00:11:18.368 { 00:11:18.368 "admin_qpairs": 0, 00:11:18.368 "completed_nvme_io": 0, 00:11:18.368 "current_admin_qpairs": 0, 
00:11:18.368 "current_io_qpairs": 0, 00:11:18.368 "io_qpairs": 0, 00:11:18.368 "name": "nvmf_tgt_poll_group_3", 00:11:18.368 "pending_bdev_io": 0, 00:11:18.368 "transports": [] 00:11:18.368 } 00:11:18.368 ], 00:11:18.368 "tick_rate": 2200000000 00:11:18.368 }' 00:11:18.368 22:11:14 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:18.368 22:11:14 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:18.368 22:11:14 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:18.368 22:11:14 -- target/rpc.sh@15 -- # wc -l 00:11:18.368 22:11:14 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:18.368 22:11:14 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:18.368 22:11:14 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:18.369 22:11:14 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.369 22:11:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.369 22:11:14 -- common/autotest_common.sh@10 -- # set +x 00:11:18.369 [2024-11-17 22:11:14.808047] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.369 22:11:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.369 22:11:14 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:18.369 22:11:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.369 22:11:14 -- common/autotest_common.sh@10 -- # set +x 00:11:18.369 22:11:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.369 22:11:14 -- target/rpc.sh@33 -- # stats='{ 00:11:18.369 "poll_groups": [ 00:11:18.369 { 00:11:18.369 "admin_qpairs": 0, 00:11:18.369 "completed_nvme_io": 0, 00:11:18.369 "current_admin_qpairs": 0, 00:11:18.369 "current_io_qpairs": 0, 00:11:18.369 "io_qpairs": 0, 00:11:18.369 "name": "nvmf_tgt_poll_group_0", 00:11:18.369 "pending_bdev_io": 0, 00:11:18.369 "transports": [ 00:11:18.369 { 00:11:18.369 "trtype": "TCP" 00:11:18.369 } 00:11:18.369 ] 00:11:18.369 }, 00:11:18.369 { 00:11:18.369 "admin_qpairs": 0, 00:11:18.369 "completed_nvme_io": 0, 00:11:18.369 "current_admin_qpairs": 0, 00:11:18.369 "current_io_qpairs": 0, 00:11:18.369 "io_qpairs": 0, 00:11:18.369 "name": "nvmf_tgt_poll_group_1", 00:11:18.369 "pending_bdev_io": 0, 00:11:18.369 "transports": [ 00:11:18.369 { 00:11:18.369 "trtype": "TCP" 00:11:18.369 } 00:11:18.369 ] 00:11:18.369 }, 00:11:18.369 { 00:11:18.369 "admin_qpairs": 0, 00:11:18.369 "completed_nvme_io": 0, 00:11:18.369 "current_admin_qpairs": 0, 00:11:18.369 "current_io_qpairs": 0, 00:11:18.369 "io_qpairs": 0, 00:11:18.369 "name": "nvmf_tgt_poll_group_2", 00:11:18.369 "pending_bdev_io": 0, 00:11:18.369 "transports": [ 00:11:18.369 { 00:11:18.369 "trtype": "TCP" 00:11:18.369 } 00:11:18.369 ] 00:11:18.369 }, 00:11:18.369 { 00:11:18.369 "admin_qpairs": 0, 00:11:18.369 "completed_nvme_io": 0, 00:11:18.369 "current_admin_qpairs": 0, 00:11:18.369 "current_io_qpairs": 0, 00:11:18.369 "io_qpairs": 0, 00:11:18.369 "name": "nvmf_tgt_poll_group_3", 00:11:18.369 "pending_bdev_io": 0, 00:11:18.369 "transports": [ 00:11:18.369 { 00:11:18.369 "trtype": "TCP" 00:11:18.369 } 00:11:18.369 ] 00:11:18.369 } 00:11:18.369 ], 00:11:18.369 "tick_rate": 2200000000 00:11:18.369 }' 00:11:18.369 22:11:14 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:18.369 22:11:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:18.369 22:11:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:18.369 22:11:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:18.369 22:11:14 -- target/rpc.sh@35 -- # (( 0 == 0 )) 
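The jcount/jsum helpers applied to the nvmf_get_stats output above amount to small jq pipelines. A sketch of the same checks, assuming the stats JSON has already been captured into $stats the way rpc_cmd nvmf_get_stats does in the trace:

#!/usr/bin/env bash
# Sketch of the jcount/jsum checks on the nvmf_get_stats JSON shown above.
# $stats is assumed to hold the JSON returned by rpc_cmd nvmf_get_stats.

# jcount: number of poll groups (the 0xF core mask gives 4).
n_groups=$(echo "$stats" | jq '.poll_groups[].name' | wc -l)
[ "$n_groups" -eq 4 ]

# jsum: total admin/io qpairs across all poll groups (0 before any host connects).
admin_qpairs=$(echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}')
io_qpairs=$(echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}')
[ "$admin_qpairs" -eq 0 ] && [ "$io_qpairs" -eq 0 ]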
00:11:18.369 22:11:14 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:18.369 22:11:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:18.369 22:11:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:18.369 22:11:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:18.369 22:11:14 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:18.369 22:11:14 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:18.369 22:11:14 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:18.369 22:11:14 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:18.369 22:11:14 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:18.369 22:11:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.369 22:11:14 -- common/autotest_common.sh@10 -- # set +x 00:11:18.628 Malloc1 00:11:18.628 22:11:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.628 22:11:14 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:18.628 22:11:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.628 22:11:14 -- common/autotest_common.sh@10 -- # set +x 00:11:18.628 22:11:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.628 22:11:15 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:18.628 22:11:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.628 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:11:18.628 22:11:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.628 22:11:15 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:18.628 22:11:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.628 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:11:18.628 22:11:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.628 22:11:15 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.628 22:11:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.628 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:11:18.628 [2024-11-17 22:11:15.022313] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.628 22:11:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.628 22:11:15 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 -a 10.0.0.2 -s 4420 00:11:18.628 22:11:15 -- common/autotest_common.sh@650 -- # local es=0 00:11:18.628 22:11:15 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 -a 10.0.0.2 -s 4420 00:11:18.628 22:11:15 -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:18.628 22:11:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:18.628 22:11:15 -- common/autotest_common.sh@642 -- # type -t nvme 00:11:18.628 22:11:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:18.628 22:11:15 -- common/autotest_common.sh@644 -- # type -P nvme 00:11:18.628 22:11:15 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:18.628 22:11:15 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:18.628 22:11:15 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:18.628 22:11:15 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 -a 10.0.0.2 -s 4420 00:11:18.628 [2024-11-17 22:11:15.044643] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671' 00:11:18.628 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:18.628 could not add new controller: failed to write to nvme-fabrics device 00:11:18.629 22:11:15 -- common/autotest_common.sh@653 -- # es=1 00:11:18.629 22:11:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:18.629 22:11:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:18.629 22:11:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:18.629 22:11:15 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:11:18.629 22:11:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.629 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:11:18.629 22:11:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.629 22:11:15 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.629 22:11:15 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.629 22:11:15 -- common/autotest_common.sh@1187 -- # local i=0 00:11:18.629 22:11:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.629 22:11:15 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:18.629 22:11:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:21.164 22:11:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:21.164 22:11:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:21.164 22:11:17 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.164 22:11:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:21.164 22:11:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.164 22:11:17 -- common/autotest_common.sh@1197 -- # return 0 00:11:21.164 22:11:17 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.164 22:11:17 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.164 22:11:17 -- common/autotest_common.sh@1208 -- # local i=0 00:11:21.164 22:11:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:21.164 22:11:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.164 22:11:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.164 22:11:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:21.164 22:11:17 -- common/autotest_common.sh@1220 -- # return 0 00:11:21.164 22:11:17 -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:11:21.164 22:11:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.164 22:11:17 -- common/autotest_common.sh@10 -- # set +x 00:11:21.164 22:11:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.164 22:11:17 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.164 22:11:17 -- common/autotest_common.sh@650 -- # local es=0 00:11:21.164 22:11:17 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.164 22:11:17 -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:21.164 22:11:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.164 22:11:17 -- common/autotest_common.sh@642 -- # type -t nvme 00:11:21.164 22:11:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.164 22:11:17 -- common/autotest_common.sh@644 -- # type -P nvme 00:11:21.164 22:11:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.164 22:11:17 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:21.164 22:11:17 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:21.164 22:11:17 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.164 [2024-11-17 22:11:17.376127] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671' 00:11:21.164 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:21.165 could not add new controller: failed to write to nvme-fabrics device 00:11:21.165 22:11:17 -- common/autotest_common.sh@653 -- # es=1 00:11:21.165 22:11:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:21.165 22:11:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:21.165 22:11:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:21.165 22:11:17 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:21.165 22:11:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.165 22:11:17 -- common/autotest_common.sh@10 -- # set +x 00:11:21.165 22:11:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.165 22:11:17 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.165 22:11:17 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.165 22:11:17 -- common/autotest_common.sh@1187 -- # local i=0 00:11:21.165 22:11:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.165 22:11:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:21.165 22:11:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:23.126 22:11:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:23.126 
22:11:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:23.126 22:11:19 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.126 22:11:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:23.126 22:11:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.126 22:11:19 -- common/autotest_common.sh@1197 -- # return 0 00:11:23.126 22:11:19 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:23.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.126 22:11:19 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:23.126 22:11:19 -- common/autotest_common.sh@1208 -- # local i=0 00:11:23.410 22:11:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:23.410 22:11:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.410 22:11:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.411 22:11:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:23.411 22:11:19 -- common/autotest_common.sh@1220 -- # return 0 00:11:23.411 22:11:19 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.411 22:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.411 22:11:19 -- common/autotest_common.sh@10 -- # set +x 00:11:23.411 22:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.411 22:11:19 -- target/rpc.sh@81 -- # seq 1 5 00:11:23.411 22:11:19 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:23.411 22:11:19 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:23.411 22:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.411 22:11:19 -- common/autotest_common.sh@10 -- # set +x 00:11:23.411 22:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.411 22:11:19 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.411 22:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.411 22:11:19 -- common/autotest_common.sh@10 -- # set +x 00:11:23.411 [2024-11-17 22:11:19.791858] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.411 22:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.411 22:11:19 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:23.411 22:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.411 22:11:19 -- common/autotest_common.sh@10 -- # set +x 00:11:23.411 22:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.411 22:11:19 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:23.411 22:11:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.411 22:11:19 -- common/autotest_common.sh@10 -- # set +x 00:11:23.411 22:11:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.411 22:11:19 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.411 22:11:19 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:23.411 22:11:19 -- common/autotest_common.sh@1187 -- # local i=0 00:11:23.411 22:11:19 -- common/autotest_common.sh@1188 -- # 
local nvme_device_counter=1 nvme_devices=0 00:11:23.411 22:11:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:23.411 22:11:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:25.951 22:11:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:25.951 22:11:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:25.951 22:11:21 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.951 22:11:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:25.951 22:11:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.951 22:11:22 -- common/autotest_common.sh@1197 -- # return 0 00:11:25.951 22:11:22 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.951 22:11:22 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.951 22:11:22 -- common/autotest_common.sh@1208 -- # local i=0 00:11:25.951 22:11:22 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:25.951 22:11:22 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.951 22:11:22 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:25.951 22:11:22 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.951 22:11:22 -- common/autotest_common.sh@1220 -- # return 0 00:11:25.951 22:11:22 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:25.951 22:11:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.951 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:11:25.951 22:11:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.951 22:11:22 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.951 22:11:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.951 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:11:25.951 22:11:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.951 22:11:22 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:25.951 22:11:22 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:25.951 22:11:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.951 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:11:25.951 22:11:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.951 22:11:22 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.951 22:11:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.951 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:11:25.951 [2024-11-17 22:11:22.104456] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.951 22:11:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.951 22:11:22 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:25.951 22:11:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.951 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:11:25.951 22:11:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.951 22:11:22 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:25.951 22:11:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.951 22:11:22 -- common/autotest_common.sh@10 
-- # set +x 00:11:25.951 22:11:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.951 22:11:22 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:25.951 22:11:22 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:25.951 22:11:22 -- common/autotest_common.sh@1187 -- # local i=0 00:11:25.951 22:11:22 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:25.951 22:11:22 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:25.951 22:11:22 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:27.855 22:11:24 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:27.855 22:11:24 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:27.855 22:11:24 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:27.855 22:11:24 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:27.855 22:11:24 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.855 22:11:24 -- common/autotest_common.sh@1197 -- # return 0 00:11:27.855 22:11:24 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.114 22:11:24 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.114 22:11:24 -- common/autotest_common.sh@1208 -- # local i=0 00:11:28.114 22:11:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:28.114 22:11:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.114 22:11:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:28.114 22:11:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.114 22:11:24 -- common/autotest_common.sh@1220 -- # return 0 00:11:28.114 22:11:24 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:28.114 22:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.114 22:11:24 -- common/autotest_common.sh@10 -- # set +x 00:11:28.114 22:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.114 22:11:24 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.114 22:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.114 22:11:24 -- common/autotest_common.sh@10 -- # set +x 00:11:28.114 22:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.114 22:11:24 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:28.114 22:11:24 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:28.114 22:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.114 22:11:24 -- common/autotest_common.sh@10 -- # set +x 00:11:28.114 22:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.114 22:11:24 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.114 22:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.114 22:11:24 -- common/autotest_common.sh@10 -- # set +x 00:11:28.114 [2024-11-17 22:11:24.525319] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.114 22:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.114 22:11:24 -- 
target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:28.114 22:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.114 22:11:24 -- common/autotest_common.sh@10 -- # set +x 00:11:28.114 22:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.114 22:11:24 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:28.114 22:11:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.114 22:11:24 -- common/autotest_common.sh@10 -- # set +x 00:11:28.114 22:11:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.115 22:11:24 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:28.115 22:11:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:28.115 22:11:24 -- common/autotest_common.sh@1187 -- # local i=0 00:11:28.115 22:11:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.115 22:11:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:28.115 22:11:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:30.645 22:11:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:30.645 22:11:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:30.645 22:11:26 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.645 22:11:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:30.645 22:11:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.645 22:11:26 -- common/autotest_common.sh@1197 -- # return 0 00:11:30.645 22:11:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.645 22:11:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.645 22:11:26 -- common/autotest_common.sh@1208 -- # local i=0 00:11:30.645 22:11:26 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:30.645 22:11:26 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.645 22:11:26 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:30.645 22:11:26 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.645 22:11:26 -- common/autotest_common.sh@1220 -- # return 0 00:11:30.645 22:11:26 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:30.645 22:11:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.645 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:11:30.645 22:11:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.645 22:11:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.645 22:11:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.645 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:11:30.645 22:11:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.645 22:11:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:30.645 22:11:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:30.645 22:11:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.645 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:11:30.645 22:11:26 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.645 22:11:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.645 22:11:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.645 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:11:30.645 [2024-11-17 22:11:26.950119] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.645 22:11:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.645 22:11:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:30.645 22:11:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.645 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:11:30.645 22:11:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.645 22:11:26 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:30.645 22:11:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.645 22:11:26 -- common/autotest_common.sh@10 -- # set +x 00:11:30.645 22:11:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.645 22:11:26 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.645 22:11:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.645 22:11:27 -- common/autotest_common.sh@1187 -- # local i=0 00:11:30.645 22:11:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.645 22:11:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:30.645 22:11:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:32.549 22:11:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:32.549 22:11:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:32.549 22:11:29 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.807 22:11:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:32.807 22:11:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.807 22:11:29 -- common/autotest_common.sh@1197 -- # return 0 00:11:32.807 22:11:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.807 22:11:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.807 22:11:29 -- common/autotest_common.sh@1208 -- # local i=0 00:11:32.807 22:11:29 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:32.807 22:11:29 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.807 22:11:29 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:32.807 22:11:29 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.807 22:11:29 -- common/autotest_common.sh@1220 -- # return 0 00:11:32.807 22:11:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:32.807 22:11:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.807 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:11:32.807 22:11:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.807 22:11:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.807 22:11:29 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.807 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:11:32.807 22:11:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.807 22:11:29 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:32.807 22:11:29 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:32.807 22:11:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.807 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:11:32.807 22:11:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.807 22:11:29 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.807 22:11:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.807 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:11:32.807 [2024-11-17 22:11:29.386530] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.807 22:11:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.807 22:11:29 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:32.807 22:11:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.807 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:11:32.807 22:11:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.807 22:11:29 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:32.807 22:11:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.807 22:11:29 -- common/autotest_common.sh@10 -- # set +x 00:11:32.807 22:11:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.807 22:11:29 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.066 22:11:29 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.066 22:11:29 -- common/autotest_common.sh@1187 -- # local i=0 00:11:33.066 22:11:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.066 22:11:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:33.066 22:11:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:34.970 22:11:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:35.229 22:11:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:35.229 22:11:31 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.229 22:11:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:35.229 22:11:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.229 22:11:31 -- common/autotest_common.sh@1197 -- # return 0 00:11:35.229 22:11:31 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.229 22:11:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.229 22:11:31 -- common/autotest_common.sh@1208 -- # local i=0 00:11:35.229 22:11:31 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:35.229 22:11:31 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.229 22:11:31 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.229 22:11:31 -- common/autotest_common.sh@1216 -- # lsblk 
-l -o NAME,SERIAL 00:11:35.229 22:11:31 -- common/autotest_common.sh@1220 -- # return 0 00:11:35.229 22:11:31 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:35.229 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.229 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.229 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.229 22:11:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.229 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.229 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.229 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.229 22:11:31 -- target/rpc.sh@99 -- # seq 1 5 00:11:35.229 22:11:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.229 22:11:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.229 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.229 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.229 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.229 22:11:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.229 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.229 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.229 [2024-11-17 22:11:31.815169] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.229 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.229 22:11:31 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.229 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.229 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.229 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.229 22:11:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.229 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.229 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.229 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.229 22:11:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.229 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.229 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.488 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.488 22:11:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.488 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.488 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.488 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.488 22:11:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.488 22:11:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.488 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.488 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.488 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.488 22:11:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.488 
22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.488 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.488 [2024-11-17 22:11:31.863200] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.488 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.488 22:11:31 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.488 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.488 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.488 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.488 22:11:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.488 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.488 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.488 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.488 22:11:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.488 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.488 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.488 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.488 22:11:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.488 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.488 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.488 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.488 22:11:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.488 22:11:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.488 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.488 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.488 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.488 22:11:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.488 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.488 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.488 [2024-11-17 22:11:31.915229] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.488 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:31 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.489 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.489 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.489 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:31 -- 
target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.489 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.489 22:11:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.489 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:31 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.489 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 [2024-11-17 22:11:31.963267] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.489 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:31 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.489 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:31 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.489 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:31 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.489 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:31 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.489 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:31 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.489 22:11:31 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.489 22:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:32 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.489 22:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:32 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 [2024-11-17 22:11:32.011337] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.489 22:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:32 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.489 22:11:32 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:32 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:32 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.489 22:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:32 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:32 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.489 22:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:32 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:32 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.489 22:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:32 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:32 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:35.489 22:11:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.489 22:11:32 -- common/autotest_common.sh@10 -- # set +x 00:11:35.489 22:11:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.489 22:11:32 -- target/rpc.sh@110 -- # stats='{ 00:11:35.489 "poll_groups": [ 00:11:35.489 { 00:11:35.489 "admin_qpairs": 2, 00:11:35.489 "completed_nvme_io": 115, 00:11:35.489 "current_admin_qpairs": 0, 00:11:35.489 "current_io_qpairs": 0, 00:11:35.489 "io_qpairs": 16, 00:11:35.489 "name": "nvmf_tgt_poll_group_0", 00:11:35.489 "pending_bdev_io": 0, 00:11:35.489 "transports": [ 00:11:35.489 { 00:11:35.489 "trtype": "TCP" 00:11:35.489 } 00:11:35.489 ] 00:11:35.489 }, 00:11:35.489 { 00:11:35.489 "admin_qpairs": 3, 00:11:35.489 "completed_nvme_io": 167, 00:11:35.489 "current_admin_qpairs": 0, 00:11:35.489 "current_io_qpairs": 0, 00:11:35.489 "io_qpairs": 17, 00:11:35.489 "name": "nvmf_tgt_poll_group_1", 00:11:35.489 "pending_bdev_io": 0, 00:11:35.489 "transports": [ 00:11:35.489 { 00:11:35.489 "trtype": "TCP" 00:11:35.489 } 00:11:35.489 ] 00:11:35.489 }, 00:11:35.489 { 00:11:35.489 "admin_qpairs": 1, 00:11:35.489 "completed_nvme_io": 69, 00:11:35.489 "current_admin_qpairs": 0, 00:11:35.489 "current_io_qpairs": 0, 00:11:35.489 "io_qpairs": 19, 00:11:35.489 "name": "nvmf_tgt_poll_group_2", 00:11:35.489 "pending_bdev_io": 0, 00:11:35.489 "transports": [ 00:11:35.489 { 00:11:35.489 "trtype": "TCP" 00:11:35.489 } 00:11:35.489 ] 00:11:35.489 }, 00:11:35.489 { 00:11:35.489 "admin_qpairs": 1, 00:11:35.489 "completed_nvme_io": 69, 00:11:35.489 "current_admin_qpairs": 0, 00:11:35.489 "current_io_qpairs": 0, 00:11:35.489 "io_qpairs": 18, 00:11:35.489 "name": "nvmf_tgt_poll_group_3", 00:11:35.489 "pending_bdev_io": 0, 00:11:35.489 "transports": [ 00:11:35.489 { 00:11:35.489 "trtype": "TCP" 00:11:35.489 } 00:11:35.489 ] 00:11:35.489 } 00:11:35.489 ], 00:11:35.489 "tick_rate": 2200000000 00:11:35.489 }' 00:11:35.489 22:11:32 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:35.489 22:11:32 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:35.489 22:11:32 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:35.489 22:11:32 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:35.748 22:11:32 -- target/rpc.sh@112 -- # (( 7 > 0 )) 
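The connect/disconnect iterations traced above are one loop in target/rpc.sh (the @81-94 tags) unrolled: create the subsystem, expose it over TCP, attach the Malloc1 namespace, connect with nvme-cli, wait for the block device to appear, then tear everything down again; the later @99-107 passes repeat the same create/delete dance without connecting a host. Condensed from the commands visible in the xtrace (the loop count comes from the "seq 1 5" expansion at rpc.sh@99, and the retry bounds in the waitforserial comment are read off the autotest_common.sh lines), this is a sketch of the loop rather than the authoritative script:

    loops=5   # per the `seq 1 5` expansion visible at target/rpc.sh@99
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # waitforserial polls `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME`
        # every 2 s, up to 16 tries, until exactly one matching device shows up
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The nvmf_get_stats dump just above and the jsum checks around it then verify the connections actually landed on the target: jsum pipes the stats JSON through jq and sums the selected field with awk ('{s+=$1}END{print s}'), and the test asserts that the per-poll-group admin_qpairs total (7) and io_qpairs total (70) are both non-zero.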
00:11:35.748 22:11:32 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:35.748 22:11:32 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:35.748 22:11:32 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:35.748 22:11:32 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:35.748 22:11:32 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:11:35.748 22:11:32 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:35.748 22:11:32 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:35.748 22:11:32 -- target/rpc.sh@123 -- # nvmftestfini 00:11:35.749 22:11:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:35.749 22:11:32 -- nvmf/common.sh@116 -- # sync 00:11:35.749 22:11:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:35.749 22:11:32 -- nvmf/common.sh@119 -- # set +e 00:11:35.749 22:11:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:35.749 22:11:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:35.749 rmmod nvme_tcp 00:11:35.749 rmmod nvme_fabrics 00:11:35.749 rmmod nvme_keyring 00:11:35.749 22:11:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:35.749 22:11:32 -- nvmf/common.sh@123 -- # set -e 00:11:35.749 22:11:32 -- nvmf/common.sh@124 -- # return 0 00:11:35.749 22:11:32 -- nvmf/common.sh@477 -- # '[' -n 66097 ']' 00:11:35.749 22:11:32 -- nvmf/common.sh@478 -- # killprocess 66097 00:11:35.749 22:11:32 -- common/autotest_common.sh@936 -- # '[' -z 66097 ']' 00:11:35.749 22:11:32 -- common/autotest_common.sh@940 -- # kill -0 66097 00:11:35.749 22:11:32 -- common/autotest_common.sh@941 -- # uname 00:11:35.749 22:11:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:35.749 22:11:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66097 00:11:35.749 22:11:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:35.749 22:11:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:35.749 22:11:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66097' 00:11:35.749 killing process with pid 66097 00:11:35.749 22:11:32 -- common/autotest_common.sh@955 -- # kill 66097 00:11:35.749 22:11:32 -- common/autotest_common.sh@960 -- # wait 66097 00:11:36.316 22:11:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:36.316 22:11:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:36.316 22:11:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:36.316 22:11:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.316 22:11:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:36.317 22:11:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.317 22:11:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.317 22:11:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.317 22:11:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:36.317 00:11:36.317 real 0m19.764s 00:11:36.317 user 1m15.140s 00:11:36.317 sys 0m1.991s 00:11:36.317 22:11:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:36.317 22:11:32 -- common/autotest_common.sh@10 -- # set +x 00:11:36.317 ************************************ 00:11:36.317 END TEST nvmf_rpc 00:11:36.317 ************************************ 00:11:36.317 22:11:32 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:36.317 22:11:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:36.317 22:11:32 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:11:36.317 22:11:32 -- common/autotest_common.sh@10 -- # set +x 00:11:36.317 ************************************ 00:11:36.317 START TEST nvmf_invalid 00:11:36.317 ************************************ 00:11:36.317 22:11:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:36.317 * Looking for test storage... 00:11:36.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:36.317 22:11:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:36.317 22:11:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:36.317 22:11:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:36.317 22:11:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:36.317 22:11:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:36.317 22:11:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:36.317 22:11:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:36.317 22:11:32 -- scripts/common.sh@335 -- # IFS=.-: 00:11:36.317 22:11:32 -- scripts/common.sh@335 -- # read -ra ver1 00:11:36.317 22:11:32 -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.317 22:11:32 -- scripts/common.sh@336 -- # read -ra ver2 00:11:36.317 22:11:32 -- scripts/common.sh@337 -- # local 'op=<' 00:11:36.317 22:11:32 -- scripts/common.sh@339 -- # ver1_l=2 00:11:36.317 22:11:32 -- scripts/common.sh@340 -- # ver2_l=1 00:11:36.317 22:11:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:36.317 22:11:32 -- scripts/common.sh@343 -- # case "$op" in 00:11:36.317 22:11:32 -- scripts/common.sh@344 -- # : 1 00:11:36.317 22:11:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:36.317 22:11:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.317 22:11:32 -- scripts/common.sh@364 -- # decimal 1 00:11:36.317 22:11:32 -- scripts/common.sh@352 -- # local d=1 00:11:36.317 22:11:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.317 22:11:32 -- scripts/common.sh@354 -- # echo 1 00:11:36.317 22:11:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:36.317 22:11:32 -- scripts/common.sh@365 -- # decimal 2 00:11:36.317 22:11:32 -- scripts/common.sh@352 -- # local d=2 00:11:36.317 22:11:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.317 22:11:32 -- scripts/common.sh@354 -- # echo 2 00:11:36.317 22:11:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:36.317 22:11:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:36.317 22:11:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:36.317 22:11:32 -- scripts/common.sh@367 -- # return 0 00:11:36.317 22:11:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.317 22:11:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:36.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.317 --rc genhtml_branch_coverage=1 00:11:36.317 --rc genhtml_function_coverage=1 00:11:36.317 --rc genhtml_legend=1 00:11:36.317 --rc geninfo_all_blocks=1 00:11:36.317 --rc geninfo_unexecuted_blocks=1 00:11:36.317 00:11:36.317 ' 00:11:36.317 22:11:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:36.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.317 --rc genhtml_branch_coverage=1 00:11:36.317 --rc genhtml_function_coverage=1 00:11:36.317 --rc genhtml_legend=1 00:11:36.317 --rc geninfo_all_blocks=1 00:11:36.317 --rc geninfo_unexecuted_blocks=1 00:11:36.317 00:11:36.317 ' 00:11:36.317 22:11:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:36.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.317 --rc genhtml_branch_coverage=1 00:11:36.317 --rc genhtml_function_coverage=1 00:11:36.317 --rc genhtml_legend=1 00:11:36.317 --rc geninfo_all_blocks=1 00:11:36.317 --rc geninfo_unexecuted_blocks=1 00:11:36.317 00:11:36.317 ' 00:11:36.317 22:11:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:36.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.317 --rc genhtml_branch_coverage=1 00:11:36.317 --rc genhtml_function_coverage=1 00:11:36.317 --rc genhtml_legend=1 00:11:36.317 --rc geninfo_all_blocks=1 00:11:36.317 --rc geninfo_unexecuted_blocks=1 00:11:36.317 00:11:36.317 ' 00:11:36.317 22:11:32 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:36.317 22:11:32 -- nvmf/common.sh@7 -- # uname -s 00:11:36.317 22:11:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.317 22:11:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.317 22:11:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.317 22:11:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.317 22:11:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.317 22:11:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.317 22:11:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.317 22:11:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.317 22:11:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.317 22:11:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.577 22:11:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:11:36.577 
22:11:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:11:36.577 22:11:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.577 22:11:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.577 22:11:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:36.577 22:11:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:36.577 22:11:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.577 22:11:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.577 22:11:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.577 22:11:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.577 22:11:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.577 22:11:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.577 22:11:32 -- paths/export.sh@5 -- # export PATH 00:11:36.577 22:11:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.577 22:11:32 -- nvmf/common.sh@46 -- # : 0 00:11:36.577 22:11:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:36.577 22:11:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:36.577 22:11:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:36.577 22:11:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.577 22:11:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.577 22:11:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
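For readers wondering where the identical --hostnqn/--hostid pair on every `nvme connect` in this log comes from: the nvmf/common.sh@17-19 lines above generate it once per run. A minimal sketch follows; only the gen-hostnqn call and the NVME_HOST array layout are visible in the trace, and deriving the host ID from the NQN's uuid suffix is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:a547cde3-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumption: the host ID reuses the uuid portion of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later: nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420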
00:11:36.577 22:11:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:36.577 22:11:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:36.577 22:11:32 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:36.577 22:11:32 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:36.577 22:11:32 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:36.577 22:11:32 -- target/invalid.sh@14 -- # target=foobar 00:11:36.577 22:11:32 -- target/invalid.sh@16 -- # RANDOM=0 00:11:36.577 22:11:32 -- target/invalid.sh@34 -- # nvmftestinit 00:11:36.577 22:11:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:36.577 22:11:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.577 22:11:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:36.577 22:11:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:36.577 22:11:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:36.577 22:11:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.577 22:11:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.577 22:11:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.577 22:11:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:36.577 22:11:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:36.577 22:11:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:36.577 22:11:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:36.577 22:11:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:36.577 22:11:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:36.577 22:11:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.577 22:11:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.577 22:11:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:36.577 22:11:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:36.577 22:11:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:36.577 22:11:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:36.577 22:11:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:36.577 22:11:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.577 22:11:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:36.577 22:11:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:36.577 22:11:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:36.577 22:11:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:36.577 22:11:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:36.577 22:11:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:36.577 Cannot find device "nvmf_tgt_br" 00:11:36.577 22:11:32 -- nvmf/common.sh@154 -- # true 00:11:36.577 22:11:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:36.577 Cannot find device "nvmf_tgt_br2" 00:11:36.577 22:11:32 -- nvmf/common.sh@155 -- # true 00:11:36.577 22:11:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:36.577 22:11:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:36.577 Cannot find device "nvmf_tgt_br" 00:11:36.577 22:11:33 -- nvmf/common.sh@157 -- # true 00:11:36.577 22:11:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:36.577 Cannot find device "nvmf_tgt_br2" 00:11:36.577 22:11:33 -- nvmf/common.sh@158 -- # true 00:11:36.577 22:11:33 
-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:36.577 22:11:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:36.577 22:11:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:36.577 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:36.577 22:11:33 -- nvmf/common.sh@161 -- # true 00:11:36.577 22:11:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:36.577 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:36.577 22:11:33 -- nvmf/common.sh@162 -- # true 00:11:36.577 22:11:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:36.577 22:11:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:36.577 22:11:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:36.577 22:11:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:36.577 22:11:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:36.577 22:11:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:36.836 22:11:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:36.836 22:11:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:36.836 22:11:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:36.836 22:11:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:36.836 22:11:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:36.836 22:11:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:36.836 22:11:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:36.836 22:11:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:36.836 22:11:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:36.836 22:11:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:36.836 22:11:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:36.836 22:11:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:36.836 22:11:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:36.836 22:11:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:36.836 22:11:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:36.836 22:11:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:36.836 22:11:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:36.836 22:11:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:36.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:11:36.836 00:11:36.836 --- 10.0.0.2 ping statistics --- 00:11:36.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.836 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:36.836 22:11:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:36.836 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:36.836 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:11:36.836 00:11:36.836 --- 10.0.0.3 ping statistics --- 00:11:36.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.836 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:36.836 22:11:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:36.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:36.836 00:11:36.836 --- 10.0.0.1 ping statistics --- 00:11:36.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.836 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:36.836 22:11:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.836 22:11:33 -- nvmf/common.sh@421 -- # return 0 00:11:36.836 22:11:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:36.836 22:11:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.836 22:11:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:36.836 22:11:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:36.836 22:11:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.836 22:11:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:36.836 22:11:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:36.836 22:11:33 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:36.836 22:11:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:36.836 22:11:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:36.836 22:11:33 -- common/autotest_common.sh@10 -- # set +x 00:11:36.836 22:11:33 -- nvmf/common.sh@469 -- # nvmfpid=66626 00:11:36.836 22:11:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.836 22:11:33 -- nvmf/common.sh@470 -- # waitforlisten 66626 00:11:36.836 22:11:33 -- common/autotest_common.sh@829 -- # '[' -z 66626 ']' 00:11:36.836 22:11:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.836 22:11:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.836 22:11:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.836 22:11:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.837 22:11:33 -- common/autotest_common.sh@10 -- # set +x 00:11:36.837 [2024-11-17 22:11:33.397252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:36.837 [2024-11-17 22:11:33.397326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.095 [2024-11-17 22:11:33.531237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.095 [2024-11-17 22:11:33.616446] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:37.095 [2024-11-17 22:11:33.616603] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.095 [2024-11-17 22:11:33.616615] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
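Before the target application starts, nvmftestinit / nvmf_veth_init (the nvmf/common.sh@140-206 lines above) builds the virtual network the whole test runs on. Condensed from the commands in the trace; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the individual link-up steps are elided here for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, moved into the namespace below
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # bridge the veth peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # result: initiator 10.0.0.1 and target 10.0.0.2 reach each other through nvmf_br,
    # which the ping checks above confirm before nvmf_tgt is launched inside the namespace.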
00:11:37.095 [2024-11-17 22:11:33.616623] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.095 [2024-11-17 22:11:33.616789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.095 [2024-11-17 22:11:33.616944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.095 [2024-11-17 22:11:33.617067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.095 [2024-11-17 22:11:33.617070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.031 22:11:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.031 22:11:34 -- common/autotest_common.sh@862 -- # return 0 00:11:38.031 22:11:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:38.031 22:11:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:38.031 22:11:34 -- common/autotest_common.sh@10 -- # set +x 00:11:38.031 22:11:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.031 22:11:34 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:38.031 22:11:34 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30727 00:11:38.290 [2024-11-17 22:11:34.683232] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:38.290 22:11:34 -- target/invalid.sh@40 -- # out='2024/11/17 22:11:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30727 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:38.290 request: 00:11:38.290 { 00:11:38.290 "method": "nvmf_create_subsystem", 00:11:38.290 "params": { 00:11:38.290 "nqn": "nqn.2016-06.io.spdk:cnode30727", 00:11:38.290 "tgt_name": "foobar" 00:11:38.290 } 00:11:38.290 } 00:11:38.290 Got JSON-RPC error response 00:11:38.290 GoRPCClient: error on JSON-RPC call' 00:11:38.290 22:11:34 -- target/invalid.sh@41 -- # [[ 2024/11/17 22:11:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30727 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:38.290 request: 00:11:38.290 { 00:11:38.290 "method": "nvmf_create_subsystem", 00:11:38.290 "params": { 00:11:38.290 "nqn": "nqn.2016-06.io.spdk:cnode30727", 00:11:38.290 "tgt_name": "foobar" 00:11:38.290 } 00:11:38.290 } 00:11:38.290 Got JSON-RPC error response 00:11:38.290 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:38.290 22:11:34 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:38.290 22:11:34 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3748 00:11:38.548 [2024-11-17 22:11:34.987593] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3748: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:38.548 22:11:35 -- target/invalid.sh@45 -- # out='2024/11/17 22:11:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode3748 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:38.548 request: 00:11:38.548 { 00:11:38.548 
"method": "nvmf_create_subsystem", 00:11:38.548 "params": { 00:11:38.548 "nqn": "nqn.2016-06.io.spdk:cnode3748", 00:11:38.548 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:38.548 } 00:11:38.548 } 00:11:38.548 Got JSON-RPC error response 00:11:38.548 GoRPCClient: error on JSON-RPC call' 00:11:38.548 22:11:35 -- target/invalid.sh@46 -- # [[ 2024/11/17 22:11:34 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode3748 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:38.548 request: 00:11:38.548 { 00:11:38.548 "method": "nvmf_create_subsystem", 00:11:38.548 "params": { 00:11:38.548 "nqn": "nqn.2016-06.io.spdk:cnode3748", 00:11:38.548 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:38.548 } 00:11:38.548 } 00:11:38.548 Got JSON-RPC error response 00:11:38.548 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:38.548 22:11:35 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:38.548 22:11:35 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16654 00:11:38.807 [2024-11-17 22:11:35.227850] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16654: invalid model number 'SPDK_Controller' 00:11:38.807 22:11:35 -- target/invalid.sh@50 -- # out='2024/11/17 22:11:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode16654], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:38.807 request: 00:11:38.807 { 00:11:38.807 "method": "nvmf_create_subsystem", 00:11:38.807 "params": { 00:11:38.807 "nqn": "nqn.2016-06.io.spdk:cnode16654", 00:11:38.807 "model_number": "SPDK_Controller\u001f" 00:11:38.807 } 00:11:38.807 } 00:11:38.807 Got JSON-RPC error response 00:11:38.807 GoRPCClient: error on JSON-RPC call' 00:11:38.807 22:11:35 -- target/invalid.sh@51 -- # [[ 2024/11/17 22:11:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode16654], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:38.807 request: 00:11:38.807 { 00:11:38.807 "method": "nvmf_create_subsystem", 00:11:38.807 "params": { 00:11:38.807 "nqn": "nqn.2016-06.io.spdk:cnode16654", 00:11:38.807 "model_number": "SPDK_Controller\u001f" 00:11:38.807 } 00:11:38.807 } 00:11:38.807 Got JSON-RPC error response 00:11:38.807 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:38.807 22:11:35 -- target/invalid.sh@54 -- # gen_random_s 21 00:11:38.808 22:11:35 -- target/invalid.sh@19 -- # local length=21 ll 00:11:38.808 22:11:35 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:38.808 22:11:35 -- target/invalid.sh@21 -- # local chars 00:11:38.808 22:11:35 -- target/invalid.sh@22 -- # local string 
00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 36 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+='$' 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 102 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=f 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 51 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=3 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 43 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=+ 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 127 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=$'\177' 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 86 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=V 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 82 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=R 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 121 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=y 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 123 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+='{' 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 119 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=w 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 74 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # 
string+=J 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 38 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+='&' 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 48 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=0 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 105 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=i 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 52 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=4 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 63 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+='?' 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 39 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=\' 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 62 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+='>' 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 69 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=E 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 116 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=t 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # printf %x 97 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:38.808 22:11:35 -- target/invalid.sh@25 -- # string+=a 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:38.808 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:38.808 22:11:35 -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:11:38.808 22:11:35 -- target/invalid.sh@31 -- # echo '$f3+VRy{wJ&0i4?'\''>Eta' 00:11:38.808 22:11:35 -- 
target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '$f3+VRy{wJ&0i4?'\''>Eta' nqn.2016-06.io.spdk:cnode23016 00:11:39.066 [2024-11-17 22:11:35.660447] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23016: invalid serial number '$f3+VRy{wJ&0i4?'>Eta' 00:11:39.325 22:11:35 -- target/invalid.sh@54 -- # out='2024/11/17 22:11:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23016 serial_number:$f3+VRy{wJ&0i4?'\''>Eta], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN $f3+VRy{wJ&0i4?'\''>Eta 00:11:39.325 request: 00:11:39.325 { 00:11:39.325 "method": "nvmf_create_subsystem", 00:11:39.325 "params": { 00:11:39.325 "nqn": "nqn.2016-06.io.spdk:cnode23016", 00:11:39.325 "serial_number": "$f3+\u007fVRy{wJ&0i4?'\''>Eta" 00:11:39.325 } 00:11:39.325 } 00:11:39.325 Got JSON-RPC error response 00:11:39.325 GoRPCClient: error on JSON-RPC call' 00:11:39.325 22:11:35 -- target/invalid.sh@55 -- # [[ 2024/11/17 22:11:35 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23016 serial_number:$f3+VRy{wJ&0i4?'>Eta], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN $f3+VRy{wJ&0i4?'>Eta 00:11:39.325 request: 00:11:39.325 { 00:11:39.325 "method": "nvmf_create_subsystem", 00:11:39.325 "params": { 00:11:39.325 "nqn": "nqn.2016-06.io.spdk:cnode23016", 00:11:39.325 "serial_number": "$f3+\u007fVRy{wJ&0i4?'>Eta" 00:11:39.325 } 00:11:39.325 } 00:11:39.325 Got JSON-RPC error response 00:11:39.325 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:39.325 22:11:35 -- target/invalid.sh@58 -- # gen_random_s 41 00:11:39.325 22:11:35 -- target/invalid.sh@19 -- # local length=41 ll 00:11:39.325 22:11:35 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:39.325 22:11:35 -- target/invalid.sh@21 -- # local chars 00:11:39.325 22:11:35 -- target/invalid.sh@22 -- # local string 00:11:39.325 22:11:35 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:39.325 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.325 22:11:35 -- target/invalid.sh@25 -- # printf %x 48 00:11:39.325 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:39.325 22:11:35 -- target/invalid.sh@25 -- # string+=0 00:11:39.325 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.325 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.325 22:11:35 -- target/invalid.sh@25 -- # printf %x 106 00:11:39.325 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:39.325 22:11:35 -- target/invalid.sh@25 -- # string+=j 00:11:39.325 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.325 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.325 22:11:35 -- target/invalid.sh@25 -- # printf %x 40 00:11:39.325 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:39.325 22:11:35 -- target/invalid.sh@25 -- # string+='(' 00:11:39.325 22:11:35 -- target/invalid.sh@24 -- # 
(( ll++ )) 00:11:39.325 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.325 22:11:35 -- target/invalid.sh@25 -- # printf %x 94 00:11:39.325 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+='^' 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 44 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=, 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 62 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+='>' 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 97 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=a 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 91 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+='[' 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 98 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=b 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 59 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=';' 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 107 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=k 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 34 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+='"' 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 116 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=t 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 84 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=T 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # 
(( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 33 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+='!' 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 99 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=c 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 97 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=a 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 95 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=_ 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 96 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+='`' 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 100 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=d 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 101 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=e 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 74 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=J 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 89 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=Y 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 112 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=p 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 119 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=w 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 61 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+== 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 97 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=a 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 86 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=V 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 85 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=U 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 43 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=+ 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 35 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+='#' 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 53 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=5 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 64 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=@ 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 125 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+='}' 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 60 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+='<' 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 71 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=G 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ 
)) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # printf %x 117 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:39.326 22:11:35 -- target/invalid.sh@25 -- # string+=u 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.326 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.327 22:11:35 -- target/invalid.sh@25 -- # printf %x 123 00:11:39.327 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:39.327 22:11:35 -- target/invalid.sh@25 -- # string+='{' 00:11:39.327 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.327 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.327 22:11:35 -- target/invalid.sh@25 -- # printf %x 57 00:11:39.327 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:39.327 22:11:35 -- target/invalid.sh@25 -- # string+=9 00:11:39.327 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.327 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.327 22:11:35 -- target/invalid.sh@25 -- # printf %x 100 00:11:39.327 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:39.327 22:11:35 -- target/invalid.sh@25 -- # string+=d 00:11:39.327 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.327 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.327 22:11:35 -- target/invalid.sh@25 -- # printf %x 74 00:11:39.327 22:11:35 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:39.327 22:11:35 -- target/invalid.sh@25 -- # string+=J 00:11:39.327 22:11:35 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:39.327 22:11:35 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:39.327 22:11:35 -- target/invalid.sh@28 -- # [[ 0 == \- ]] 00:11:39.327 22:11:35 -- target/invalid.sh@31 -- # echo '0j(^,>a[b;k"tT!ca_`deJYpw=aVU+#5@}a[b;k"tT!ca_`deJYpw=aVU+#5@}a[b;k"tT!ca_`deJYpw=aVU+#5@}a[b;k"tT!ca_`deJYpw=aVU+#5@}a[b;k"tT!ca_`deJYpw=aVU+#5@}a[b;k\"tT!ca_`deJYpw=aVU+#5@} /dev/null' 00:11:42.839 22:11:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.839 22:11:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:42.839 00:11:42.839 real 0m6.574s 00:11:42.839 user 0m26.342s 00:11:42.839 sys 0m1.417s 00:11:42.839 22:11:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:42.839 22:11:39 -- common/autotest_common.sh@10 -- # set +x 00:11:42.839 ************************************ 00:11:42.839 END TEST nvmf_invalid 00:11:42.839 ************************************ 00:11:42.839 22:11:39 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:42.839 22:11:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:42.839 22:11:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:42.839 22:11:39 -- common/autotest_common.sh@10 -- # set +x 00:11:42.839 ************************************ 00:11:42.839 START TEST nvmf_abort 00:11:42.839 ************************************ 00:11:42.839 22:11:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:43.098 * Looking for test storage... 
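For readability, the failing calls that nvmf_invalid exercised above reduce to the following shape (rpc.py path, NQNs and flags copied from the trace; the first command is reconstructed from the request parameters shown in the error output, so treat it as illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # serial number carrying a non-printable 0x1f byte -> Code=-32602 "Invalid SN"
    "$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3748

    # model number carrying a non-printable 0x1f byte -> Code=-32602 "Invalid MN"
    "$rpc" nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16654

Both calls are expected to fail; the test only checks that the GoRPCClient error text contains "Invalid SN" or "Invalid MN", as the [[ ... == *Invalid SN* ]] guards in the trace show.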
00:11:43.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:43.098 22:11:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:43.098 22:11:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:43.098 22:11:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:43.098 22:11:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:43.098 22:11:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:43.098 22:11:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:43.098 22:11:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:43.098 22:11:39 -- scripts/common.sh@335 -- # IFS=.-: 00:11:43.098 22:11:39 -- scripts/common.sh@335 -- # read -ra ver1 00:11:43.098 22:11:39 -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.098 22:11:39 -- scripts/common.sh@336 -- # read -ra ver2 00:11:43.098 22:11:39 -- scripts/common.sh@337 -- # local 'op=<' 00:11:43.098 22:11:39 -- scripts/common.sh@339 -- # ver1_l=2 00:11:43.098 22:11:39 -- scripts/common.sh@340 -- # ver2_l=1 00:11:43.098 22:11:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:43.098 22:11:39 -- scripts/common.sh@343 -- # case "$op" in 00:11:43.098 22:11:39 -- scripts/common.sh@344 -- # : 1 00:11:43.098 22:11:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:43.098 22:11:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:43.098 22:11:39 -- scripts/common.sh@364 -- # decimal 1 00:11:43.098 22:11:39 -- scripts/common.sh@352 -- # local d=1 00:11:43.098 22:11:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.098 22:11:39 -- scripts/common.sh@354 -- # echo 1 00:11:43.098 22:11:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:43.098 22:11:39 -- scripts/common.sh@365 -- # decimal 2 00:11:43.098 22:11:39 -- scripts/common.sh@352 -- # local d=2 00:11:43.098 22:11:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.098 22:11:39 -- scripts/common.sh@354 -- # echo 2 00:11:43.098 22:11:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:43.098 22:11:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:43.098 22:11:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:43.098 22:11:39 -- scripts/common.sh@367 -- # return 0 00:11:43.098 22:11:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.098 22:11:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:43.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.098 --rc genhtml_branch_coverage=1 00:11:43.098 --rc genhtml_function_coverage=1 00:11:43.098 --rc genhtml_legend=1 00:11:43.098 --rc geninfo_all_blocks=1 00:11:43.098 --rc geninfo_unexecuted_blocks=1 00:11:43.098 00:11:43.098 ' 00:11:43.098 22:11:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:43.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.098 --rc genhtml_branch_coverage=1 00:11:43.098 --rc genhtml_function_coverage=1 00:11:43.098 --rc genhtml_legend=1 00:11:43.098 --rc geninfo_all_blocks=1 00:11:43.098 --rc geninfo_unexecuted_blocks=1 00:11:43.098 00:11:43.098 ' 00:11:43.098 22:11:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:43.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.098 --rc genhtml_branch_coverage=1 00:11:43.098 --rc genhtml_function_coverage=1 00:11:43.098 --rc genhtml_legend=1 00:11:43.098 --rc geninfo_all_blocks=1 00:11:43.098 --rc geninfo_unexecuted_blocks=1 00:11:43.098 00:11:43.098 ' 00:11:43.098 
22:11:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:43.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.098 --rc genhtml_branch_coverage=1 00:11:43.098 --rc genhtml_function_coverage=1 00:11:43.098 --rc genhtml_legend=1 00:11:43.098 --rc geninfo_all_blocks=1 00:11:43.098 --rc geninfo_unexecuted_blocks=1 00:11:43.098 00:11:43.098 ' 00:11:43.098 22:11:39 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:43.098 22:11:39 -- nvmf/common.sh@7 -- # uname -s 00:11:43.098 22:11:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.098 22:11:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.098 22:11:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.098 22:11:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.098 22:11:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.098 22:11:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.098 22:11:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.098 22:11:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.098 22:11:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.099 22:11:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.099 22:11:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:11:43.099 22:11:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:11:43.099 22:11:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.099 22:11:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.099 22:11:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:43.099 22:11:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:43.099 22:11:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.099 22:11:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.099 22:11:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.099 22:11:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.099 22:11:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.099 22:11:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.099 22:11:39 -- paths/export.sh@5 -- # export PATH 00:11:43.099 22:11:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.099 22:11:39 -- nvmf/common.sh@46 -- # : 0 00:11:43.099 22:11:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:43.099 22:11:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:43.099 22:11:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:43.099 22:11:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.099 22:11:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.099 22:11:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:43.099 22:11:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:43.099 22:11:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:43.099 22:11:39 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:43.099 22:11:39 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:43.099 22:11:39 -- target/abort.sh@14 -- # nvmftestinit 00:11:43.099 22:11:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:43.099 22:11:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.099 22:11:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:43.099 22:11:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:43.099 22:11:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:43.099 22:11:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.099 22:11:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.099 22:11:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.099 22:11:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:43.099 22:11:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:43.099 22:11:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:43.099 22:11:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:43.099 22:11:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:43.099 22:11:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:43.099 22:11:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.099 22:11:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.099 22:11:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:43.099 22:11:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:43.099 22:11:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:43.099 22:11:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:43.099 22:11:39 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:43.099 22:11:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.099 22:11:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:43.099 22:11:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:43.099 22:11:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:43.099 22:11:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:43.099 22:11:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:43.099 22:11:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:43.099 Cannot find device "nvmf_tgt_br" 00:11:43.099 22:11:39 -- nvmf/common.sh@154 -- # true 00:11:43.099 22:11:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:43.099 Cannot find device "nvmf_tgt_br2" 00:11:43.099 22:11:39 -- nvmf/common.sh@155 -- # true 00:11:43.099 22:11:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:43.099 22:11:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:43.099 Cannot find device "nvmf_tgt_br" 00:11:43.099 22:11:39 -- nvmf/common.sh@157 -- # true 00:11:43.099 22:11:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:43.099 Cannot find device "nvmf_tgt_br2" 00:11:43.099 22:11:39 -- nvmf/common.sh@158 -- # true 00:11:43.099 22:11:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:43.358 22:11:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:43.358 22:11:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.358 22:11:39 -- nvmf/common.sh@161 -- # true 00:11:43.358 22:11:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.358 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.358 22:11:39 -- nvmf/common.sh@162 -- # true 00:11:43.358 22:11:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:43.358 22:11:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:43.358 22:11:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:43.358 22:11:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:43.358 22:11:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:43.358 22:11:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:43.358 22:11:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:43.358 22:11:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:43.358 22:11:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:43.358 22:11:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:43.358 22:11:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:43.358 22:11:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:43.358 22:11:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:43.358 22:11:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:43.358 22:11:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:43.358 22:11:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:43.358 22:11:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:43.358 22:11:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:43.358 22:11:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:43.358 22:11:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:43.358 22:11:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:43.358 22:11:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:43.358 22:11:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:43.358 22:11:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:43.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:11:43.358 00:11:43.358 --- 10.0.0.2 ping statistics --- 00:11:43.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.359 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:43.359 22:11:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:43.359 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:43.359 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:11:43.359 00:11:43.359 --- 10.0.0.3 ping statistics --- 00:11:43.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.359 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:43.359 22:11:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:43.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:43.359 00:11:43.359 --- 10.0.0.1 ping statistics --- 00:11:43.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.359 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:43.359 22:11:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.359 22:11:39 -- nvmf/common.sh@421 -- # return 0 00:11:43.359 22:11:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:43.359 22:11:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.359 22:11:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:43.359 22:11:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:43.359 22:11:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.359 22:11:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:43.359 22:11:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:43.359 22:11:39 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:43.359 22:11:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:43.359 22:11:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.359 22:11:39 -- common/autotest_common.sh@10 -- # set +x 00:11:43.618 22:11:39 -- nvmf/common.sh@469 -- # nvmfpid=67150 00:11:43.619 22:11:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:43.619 22:11:39 -- nvmf/common.sh@470 -- # waitforlisten 67150 00:11:43.619 22:11:39 -- common/autotest_common.sh@829 -- # '[' -z 67150 ']' 00:11:43.619 22:11:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.619 22:11:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.619 22:11:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
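The nvmf_veth_init steps traced above amount to a small fixed topology: the initiator keeps 10.0.0.1 in the root namespace, the target addresses 10.0.0.2/10.0.0.3 live inside nvmf_tgt_ns_spdk, and veth pairs bridged by nvmf_br connect the two. Condensed (commands taken from the trace; the second target interface and teardown are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # root namespace -> target namespace, as checked above

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) confirm the path before the target application is started.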
00:11:43.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.619 22:11:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.619 22:11:39 -- common/autotest_common.sh@10 -- # set +x 00:11:43.619 [2024-11-17 22:11:40.039266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:43.619 [2024-11-17 22:11:40.039366] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.619 [2024-11-17 22:11:40.174950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:43.878 [2024-11-17 22:11:40.280020] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:43.878 [2024-11-17 22:11:40.280156] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.878 [2024-11-17 22:11:40.280167] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.878 [2024-11-17 22:11:40.280175] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.878 [2024-11-17 22:11:40.280281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.878 [2024-11-17 22:11:40.280619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.878 [2024-11-17 22:11:40.280629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.446 22:11:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.446 22:11:41 -- common/autotest_common.sh@862 -- # return 0 00:11:44.446 22:11:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:44.446 22:11:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:44.446 22:11:41 -- common/autotest_common.sh@10 -- # set +x 00:11:44.705 22:11:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.705 22:11:41 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:44.705 22:11:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.705 22:11:41 -- common/autotest_common.sh@10 -- # set +x 00:11:44.705 [2024-11-17 22:11:41.070204] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.705 22:11:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.705 22:11:41 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:44.705 22:11:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.705 22:11:41 -- common/autotest_common.sh@10 -- # set +x 00:11:44.705 Malloc0 00:11:44.705 22:11:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.705 22:11:41 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:44.705 22:11:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.705 22:11:41 -- common/autotest_common.sh@10 -- # set +x 00:11:44.705 Delay0 00:11:44.705 22:11:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.705 22:11:41 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:44.705 22:11:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.705 22:11:41 -- common/autotest_common.sh@10 -- # set +x 00:11:44.705 22:11:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
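At this point abort.sh starts assembling its target over JSON-RPC through the script's rpc_cmd helper. Pulled together for readability — the namespace and listener calls appear in the next few trace entries below; rpc.py is invoked directly here, flags exactly as traced:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t tcp -o -u 8192 -a 256
    "$rpc" bdev_malloc_create 64 4096 -b Malloc0      # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from abort.sh
    # delay bdev stacked on Malloc0, presumably so I/O stays queued long enough to be aborted
    "$rpc" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420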
00:11:44.705 22:11:41 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:44.705 22:11:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.705 22:11:41 -- common/autotest_common.sh@10 -- # set +x 00:11:44.705 22:11:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.705 22:11:41 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:44.705 22:11:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.705 22:11:41 -- common/autotest_common.sh@10 -- # set +x 00:11:44.705 [2024-11-17 22:11:41.155903] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.705 22:11:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.705 22:11:41 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:44.705 22:11:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.705 22:11:41 -- common/autotest_common.sh@10 -- # set +x 00:11:44.705 22:11:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.705 22:11:41 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:44.964 [2024-11-17 22:11:41.336029] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:46.871 Initializing NVMe Controllers 00:11:46.871 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:46.871 controller IO queue size 128 less than required 00:11:46.871 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:46.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:46.871 Initialization complete. Launching workers. 
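The workload is SPDK's bundled abort example, pointed at the listener just created. The invocation from the trace, with flag meanings read from the surrounding output rather than from the example's help text (treat them as an interpretation):

    # -r : transport ID of the target created above (TCP, 10.0.0.2:4420)
    # -c : core mask for the example app (0x1 = core 0 only)
    # -t : run time in seconds (assumed)
    # -l : log level
    # -q : queue depth, matching the "controller IO queue size 128" notice above
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

In the statistics that follow, the 35846 I/Os reported as "failed" on the NS line match the 35846 reported as "success" on the abort line, consistent with aborted I/Os being counted as failed completions.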
00:11:46.871 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 35846 00:11:46.871 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35907, failed to submit 62 00:11:46.871 success 35846, unsuccess 61, failed 0 00:11:46.871 22:11:43 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:46.871 22:11:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.871 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:11:46.871 22:11:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.871 22:11:43 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:46.871 22:11:43 -- target/abort.sh@38 -- # nvmftestfini 00:11:46.871 22:11:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:46.871 22:11:43 -- nvmf/common.sh@116 -- # sync 00:11:46.871 22:11:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:46.871 22:11:43 -- nvmf/common.sh@119 -- # set +e 00:11:46.871 22:11:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:46.871 22:11:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:46.871 rmmod nvme_tcp 00:11:46.871 rmmod nvme_fabrics 00:11:47.131 rmmod nvme_keyring 00:11:47.131 22:11:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:47.131 22:11:43 -- nvmf/common.sh@123 -- # set -e 00:11:47.131 22:11:43 -- nvmf/common.sh@124 -- # return 0 00:11:47.131 22:11:43 -- nvmf/common.sh@477 -- # '[' -n 67150 ']' 00:11:47.131 22:11:43 -- nvmf/common.sh@478 -- # killprocess 67150 00:11:47.131 22:11:43 -- common/autotest_common.sh@936 -- # '[' -z 67150 ']' 00:11:47.131 22:11:43 -- common/autotest_common.sh@940 -- # kill -0 67150 00:11:47.131 22:11:43 -- common/autotest_common.sh@941 -- # uname 00:11:47.131 22:11:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:47.131 22:11:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67150 00:11:47.131 killing process with pid 67150 00:11:47.131 22:11:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:47.131 22:11:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:47.131 22:11:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67150' 00:11:47.131 22:11:43 -- common/autotest_common.sh@955 -- # kill 67150 00:11:47.131 22:11:43 -- common/autotest_common.sh@960 -- # wait 67150 00:11:47.389 22:11:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:47.390 22:11:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:47.390 22:11:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:47.390 22:11:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.390 22:11:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:47.390 22:11:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.390 22:11:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.390 22:11:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.390 22:11:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:47.390 00:11:47.390 real 0m4.553s 00:11:47.390 user 0m12.830s 00:11:47.390 sys 0m1.033s 00:11:47.390 22:11:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:47.390 ************************************ 00:11:47.390 END TEST nvmf_abort 00:11:47.390 ************************************ 00:11:47.390 22:11:43 -- common/autotest_common.sh@10 -- # set +x 00:11:47.390 22:11:43 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:47.390 22:11:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:47.390 22:11:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:47.390 22:11:44 -- common/autotest_common.sh@10 -- # set +x 00:11:47.649 ************************************ 00:11:47.649 START TEST nvmf_ns_hotplug_stress 00:11:47.649 ************************************ 00:11:47.649 22:11:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:47.649 * Looking for test storage... 00:11:47.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:47.649 22:11:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:47.649 22:11:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:47.649 22:11:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:47.649 22:11:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:47.649 22:11:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:47.649 22:11:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:47.649 22:11:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:47.649 22:11:44 -- scripts/common.sh@335 -- # IFS=.-: 00:11:47.649 22:11:44 -- scripts/common.sh@335 -- # read -ra ver1 00:11:47.649 22:11:44 -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.649 22:11:44 -- scripts/common.sh@336 -- # read -ra ver2 00:11:47.649 22:11:44 -- scripts/common.sh@337 -- # local 'op=<' 00:11:47.649 22:11:44 -- scripts/common.sh@339 -- # ver1_l=2 00:11:47.649 22:11:44 -- scripts/common.sh@340 -- # ver2_l=1 00:11:47.649 22:11:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:47.649 22:11:44 -- scripts/common.sh@343 -- # case "$op" in 00:11:47.649 22:11:44 -- scripts/common.sh@344 -- # : 1 00:11:47.649 22:11:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:47.649 22:11:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.649 22:11:44 -- scripts/common.sh@364 -- # decimal 1 00:11:47.649 22:11:44 -- scripts/common.sh@352 -- # local d=1 00:11:47.649 22:11:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.649 22:11:44 -- scripts/common.sh@354 -- # echo 1 00:11:47.649 22:11:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:47.650 22:11:44 -- scripts/common.sh@365 -- # decimal 2 00:11:47.650 22:11:44 -- scripts/common.sh@352 -- # local d=2 00:11:47.650 22:11:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.650 22:11:44 -- scripts/common.sh@354 -- # echo 2 00:11:47.650 22:11:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:47.650 22:11:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:47.650 22:11:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:47.650 22:11:44 -- scripts/common.sh@367 -- # return 0 00:11:47.650 22:11:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.650 22:11:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:47.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.650 --rc genhtml_branch_coverage=1 00:11:47.650 --rc genhtml_function_coverage=1 00:11:47.650 --rc genhtml_legend=1 00:11:47.650 --rc geninfo_all_blocks=1 00:11:47.650 --rc geninfo_unexecuted_blocks=1 00:11:47.650 00:11:47.650 ' 00:11:47.650 22:11:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:47.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.650 --rc genhtml_branch_coverage=1 00:11:47.650 --rc genhtml_function_coverage=1 00:11:47.650 --rc genhtml_legend=1 00:11:47.650 --rc geninfo_all_blocks=1 00:11:47.650 --rc geninfo_unexecuted_blocks=1 00:11:47.650 00:11:47.650 ' 00:11:47.650 22:11:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:47.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.650 --rc genhtml_branch_coverage=1 00:11:47.650 --rc genhtml_function_coverage=1 00:11:47.650 --rc genhtml_legend=1 00:11:47.650 --rc geninfo_all_blocks=1 00:11:47.650 --rc geninfo_unexecuted_blocks=1 00:11:47.650 00:11:47.650 ' 00:11:47.650 22:11:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:47.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.650 --rc genhtml_branch_coverage=1 00:11:47.650 --rc genhtml_function_coverage=1 00:11:47.650 --rc genhtml_legend=1 00:11:47.650 --rc geninfo_all_blocks=1 00:11:47.650 --rc geninfo_unexecuted_blocks=1 00:11:47.650 00:11:47.650 ' 00:11:47.650 22:11:44 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:47.650 22:11:44 -- nvmf/common.sh@7 -- # uname -s 00:11:47.650 22:11:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.650 22:11:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.650 22:11:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.650 22:11:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.650 22:11:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.650 22:11:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.650 22:11:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.650 22:11:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.650 22:11:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.650 22:11:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.650 22:11:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 
00:11:47.650 22:11:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:11:47.650 22:11:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.650 22:11:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.650 22:11:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:47.650 22:11:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.650 22:11:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.650 22:11:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.650 22:11:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.650 22:11:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.650 22:11:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.650 22:11:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.650 22:11:44 -- paths/export.sh@5 -- # export PATH 00:11:47.650 22:11:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.650 22:11:44 -- nvmf/common.sh@46 -- # : 0 00:11:47.650 22:11:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:47.650 22:11:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:47.650 22:11:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:47.650 22:11:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.650 22:11:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.650 22:11:44 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:11:47.650 22:11:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:47.650 22:11:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:47.650 22:11:44 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:47.650 22:11:44 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:47.650 22:11:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:47.650 22:11:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.650 22:11:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:47.650 22:11:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:47.650 22:11:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:47.650 22:11:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.650 22:11:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.650 22:11:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.650 22:11:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:47.650 22:11:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:47.650 22:11:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:47.650 22:11:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:47.650 22:11:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:47.650 22:11:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:47.650 22:11:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.650 22:11:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.650 22:11:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:47.650 22:11:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:47.650 22:11:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:47.650 22:11:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:47.650 22:11:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:47.650 22:11:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.650 22:11:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:47.650 22:11:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:47.650 22:11:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:47.650 22:11:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:47.650 22:11:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:47.650 22:11:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:47.650 Cannot find device "nvmf_tgt_br" 00:11:47.650 22:11:44 -- nvmf/common.sh@154 -- # true 00:11:47.650 22:11:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:47.650 Cannot find device "nvmf_tgt_br2" 00:11:47.650 22:11:44 -- nvmf/common.sh@155 -- # true 00:11:47.650 22:11:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:47.909 22:11:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:47.909 Cannot find device "nvmf_tgt_br" 00:11:47.909 22:11:44 -- nvmf/common.sh@157 -- # true 00:11:47.909 22:11:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:47.909 Cannot find device "nvmf_tgt_br2" 00:11:47.909 22:11:44 -- nvmf/common.sh@158 -- # true 00:11:47.909 22:11:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:47.909 22:11:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:47.909 22:11:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:47.909 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:47.909 22:11:44 -- nvmf/common.sh@161 -- # true 00:11:47.909 22:11:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:47.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.909 22:11:44 -- nvmf/common.sh@162 -- # true 00:11:47.909 22:11:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:47.909 22:11:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:47.909 22:11:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:47.909 22:11:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:47.909 22:11:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:47.909 22:11:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:47.909 22:11:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:47.909 22:11:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:47.909 22:11:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:47.909 22:11:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:47.909 22:11:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:47.909 22:11:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:47.909 22:11:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:47.909 22:11:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:47.909 22:11:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:47.909 22:11:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:47.909 22:11:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:47.909 22:11:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:47.909 22:11:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:47.909 22:11:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:47.909 22:11:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:47.910 22:11:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:48.169 22:11:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:48.169 22:11:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:48.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:48.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:11:48.169 00:11:48.169 --- 10.0.0.2 ping statistics --- 00:11:48.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.169 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:48.169 22:11:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:48.169 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:48.169 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:48.169 00:11:48.169 --- 10.0.0.3 ping statistics --- 00:11:48.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.169 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:48.169 22:11:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:48.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:48.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:48.169 00:11:48.169 --- 10.0.0.1 ping statistics --- 00:11:48.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.169 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:48.169 22:11:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.169 22:11:44 -- nvmf/common.sh@421 -- # return 0 00:11:48.169 22:11:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:48.169 22:11:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.169 22:11:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:48.169 22:11:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:48.169 22:11:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.169 22:11:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:48.169 22:11:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:48.169 22:11:44 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:48.169 22:11:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:48.169 22:11:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:48.169 22:11:44 -- common/autotest_common.sh@10 -- # set +x 00:11:48.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.169 22:11:44 -- nvmf/common.sh@469 -- # nvmfpid=67416 00:11:48.169 22:11:44 -- nvmf/common.sh@470 -- # waitforlisten 67416 00:11:48.169 22:11:44 -- common/autotest_common.sh@829 -- # '[' -z 67416 ']' 00:11:48.169 22:11:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:48.169 22:11:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.169 22:11:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:48.169 22:11:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.169 22:11:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.169 22:11:44 -- common/autotest_common.sh@10 -- # set +x 00:11:48.169 [2024-11-17 22:11:44.629170] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:48.169 [2024-11-17 22:11:44.629567] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.169 [2024-11-17 22:11:44.775680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:48.428 [2024-11-17 22:11:44.908253] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:48.428 [2024-11-17 22:11:44.908631] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.428 [2024-11-17 22:11:44.908794] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.428 [2024-11-17 22:11:44.908982] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
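Note on the setup traced above: for NET_TYPE=virt the harness builds the whole NVMe/TCP test network out of veth pairs, a bridge and one network namespace before the target is started. Condensed from the ip/iptables commands visible in the trace (the names and addresses are the ones the harness uses; cleanup of leftover devices from earlier runs is omitted), the topology amounts to the following sketch:

# Target side lives in its own namespace; initiator side stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side veth ends together and allow NVMe/TCP traffic in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the trace then confirm reachability from the initiator to both target addresses (10.0.0.2, 10.0.0.3) and from inside the target namespace back to the initiator (10.0.0.1).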
00:11:48.428 [2024-11-17 22:11:44.909323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.428 [2024-11-17 22:11:44.909477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.428 [2024-11-17 22:11:44.909484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.996 22:11:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.996 22:11:45 -- common/autotest_common.sh@862 -- # return 0 00:11:48.996 22:11:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:48.996 22:11:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:48.996 22:11:45 -- common/autotest_common.sh@10 -- # set +x 00:11:49.255 22:11:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.255 22:11:45 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:49.255 22:11:45 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:49.514 [2024-11-17 22:11:45.914255] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.514 22:11:45 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:49.773 22:11:46 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.032 [2024-11-17 22:11:46.443239] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.032 22:11:46 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:50.290 22:11:46 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:50.549 Malloc0 00:11:50.549 22:11:46 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:50.549 Delay0 00:11:50.807 22:11:47 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.066 22:11:47 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:51.066 NULL1 00:11:51.325 22:11:47 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:51.325 22:11:47 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:51.325 22:11:47 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=67547 00:11:51.325 22:11:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:11:51.325 22:11:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.701 Read completed with error (sct=0, sc=11) 00:11:52.701 22:11:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.701 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:11:52.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:52.960 22:11:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:52.960 22:11:49 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:53.218 true 00:11:53.218 22:11:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:11:53.218 22:11:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.155 22:11:50 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.155 22:11:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:54.155 22:11:50 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:54.415 true 00:11:54.415 22:11:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:11:54.415 22:11:50 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.675 22:11:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.933 22:11:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:54.933 22:11:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:55.192 true 00:11:55.192 22:11:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:11:55.192 22:11:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.129 22:11:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:56.129 22:11:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:56.129 22:11:52 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:56.388 true 00:11:56.388 22:11:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:11:56.388 22:11:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.647 22:11:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:56.906 22:11:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:56.906 22:11:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:57.165 true 00:11:57.165 22:11:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:11:57.165 22:11:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.423 22:11:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.682 22:11:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 
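For orientation, the target-side objects this test exercises were created a few seconds earlier via rpc.py (the nvmf_tgt itself runs under ip netns exec nvmf_tgt_ns_spdk with -m 0xE). Collected from the trace into one place, the bring-up is roughly:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # as set at @11 in the trace

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Backing bdevs: a malloc disk wrapped in a delay bdev, plus a resizable null bdev.
$rpc_py bdev_malloc_create 32 512 -b Malloc0
$rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc_py bdev_null_create NULL1 1000 512

# Both bdevs are exposed as namespaces of cnode1.
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Delay0 ends up as the namespace being hot-removed and re-added below, while NULL1 is the bdev that bdev_null_resize grows step by step (null_size 1000 → 1001 → …) while it stays attached.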
00:11:57.682 22:11:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:57.941 true 00:11:57.941 22:11:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:11:57.941 22:11:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.876 22:11:55 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.135 22:11:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:59.135 22:11:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:59.393 true 00:11:59.393 22:11:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:11:59.393 22:11:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.652 22:11:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.911 22:11:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:59.911 22:11:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:00.170 true 00:12:00.170 22:11:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:00.170 22:11:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.428 22:11:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.687 22:11:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:00.687 22:11:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:00.946 true 00:12:00.946 22:11:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:00.946 22:11:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.882 22:11:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.141 22:11:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:02.141 22:11:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:02.400 true 00:12:02.400 22:11:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:02.400 22:11:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.659 22:11:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.917 22:11:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:02.917 22:11:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:03.176 true 00:12:03.176 22:11:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:03.176 22:11:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.435 22:11:59 -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.694 22:12:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:03.694 22:12:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:03.953 true 00:12:03.953 22:12:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:03.953 22:12:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.886 22:12:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.451 22:12:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:05.451 22:12:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:05.709 true 00:12:05.709 22:12:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:05.709 22:12:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.136 22:12:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:07.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.402 22:12:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:07.402 22:12:03 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:07.659 true 00:12:07.659 22:12:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:07.659 22:12:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.591 22:12:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.591 22:12:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:08.591 22:12:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:08.849 true 00:12:08.849 22:12:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:08.849 22:12:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.107 22:12:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:09.366 22:12:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:09.366 22:12:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:09.625 true 00:12:09.625 22:12:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:09.625 22:12:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
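The "Read completed with error (sct=0, sc=11)" bursts come from the initiator-side workload, the spdk_nvme_perf instance recorded earlier as PERF_PID=67547: its reads are in flight against namespaces that the loop keeps removing. The invocation as recorded in the trace is below; backgrounding it with & and capturing $! is an assumption here, since the trace only shows the resulting PERF_PID.

# 30 s of 512-byte, queue-depth-128 random reads over NVMe/TCP against the
# 10.0.0.2:4420 listener. With -Q 1000 the tool keeps running through I/O
# errors and, judging by the log, reports only every 1000th one
# ("Message suppressed 999 times" between printed failures).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!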
00:12:09.883 22:12:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.142 22:12:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:10.142 22:12:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:10.401 true 00:12:10.401 22:12:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:10.401 22:12:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.338 22:12:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.596 22:12:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:11.597 22:12:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:11.855 true 00:12:12.113 22:12:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:12.113 22:12:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.677 22:12:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.934 22:12:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:12.934 22:12:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:13.192 true 00:12:13.192 22:12:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:13.192 22:12:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.449 22:12:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.707 22:12:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:13.707 22:12:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:13.965 true 00:12:13.965 22:12:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:13.965 22:12:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.223 22:12:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.481 22:12:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:14.481 22:12:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:14.740 true 00:12:14.740 22:12:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:14.740 22:12:11 -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.675 22:12:12 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.933 22:12:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:15.933 22:12:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:16.191 true 00:12:16.191 22:12:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:16.191 22:12:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.128 22:12:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.387 22:12:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:17.387 22:12:13 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:17.646 true 00:12:17.646 22:12:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:17.646 22:12:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.904 22:12:14 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.163 22:12:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:18.163 22:12:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:18.422 true 00:12:18.422 22:12:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:18.422 22:12:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.681 22:12:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.939 22:12:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:18.939 22:12:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:19.198 true 00:12:19.198 22:12:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:19.198 22:12:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.131 22:12:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.390 22:12:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:20.390 22:12:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:20.649 true 00:12:20.649 22:12:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 
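The repeating @44/@45/@46/@49/@50 trace lines are the core hotplug loop of ns_hotplug_stress.sh. Reassembled from the trace (the actual script may differ in detail), it behaves like:

# rpc_py and PERF_PID are set earlier in the trace; null_size starts at 1000.
# Keep cycling namespace 1 and growing NULL1 for as long as perf is running.
while kill -0 "$PERF_PID"; do
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove ns 1 under load
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # plug it back in
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"                        # resize the other namespace's bdev while attached
done
wait "$PERF_PID"    # @53 in the trace, reached once kill -0 reports the perf process is gone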
00:12:20.649 22:12:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.923 22:12:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.194 22:12:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:21.194 22:12:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:21.452 true 00:12:21.452 22:12:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:21.452 22:12:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.711 Initializing NVMe Controllers 00:12:21.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:21.711 Controller IO queue size 128, less than required. 00:12:21.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:21.711 Controller IO queue size 128, less than required. 00:12:21.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:21.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:21.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:21.711 Initialization complete. Launching workers. 00:12:21.711 ======================================================== 00:12:21.711 Latency(us) 00:12:21.711 Device Information : IOPS MiB/s Average min max 00:12:21.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 881.95 0.43 72364.71 3012.11 1166251.58 00:12:21.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10313.64 5.04 12410.57 2384.94 586288.27 00:12:21.711 ======================================================== 00:12:21.711 Total : 11195.59 5.47 17133.57 2384.94 1166251.58 00:12:21.711 00:12:21.711 22:12:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.969 22:12:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:21.969 22:12:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:22.227 true 00:12:22.227 22:12:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67547 00:12:22.227 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (67547) - No such process 00:12:22.227 22:12:18 -- target/ns_hotplug_stress.sh@53 -- # wait 67547 00:12:22.227 22:12:18 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.486 22:12:18 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:22.742 22:12:19 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:12:22.742 22:12:19 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:12:22.742 22:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:12:22.742 22:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:22.742 22:12:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:12:22.999 null0 00:12:22.999 22:12:19 -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:22.999 22:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:22.999 22:12:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:12:23.257 null1 00:12:23.257 22:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:23.257 22:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:23.257 22:12:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:23.515 null2 00:12:23.515 22:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:23.515 22:12:19 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:23.515 22:12:19 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:23.773 null3 00:12:23.773 22:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:23.773 22:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:23.773 22:12:20 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:24.032 null4 00:12:24.032 22:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:24.032 22:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:24.032 22:12:20 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:24.032 null5 00:12:24.290 22:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:24.290 22:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:24.290 22:12:20 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:24.290 null6 00:12:24.290 22:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:24.290 22:12:20 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:24.290 22:12:20 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:24.549 null7 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@66 -- # wait 68559 68560 68563 68564 68566 68568 68569 68570 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:24.807 22:12:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:24.808 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:24.808 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.808 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:24.808 22:12:21 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:24.808 22:12:21 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:24.808 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:24.808 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:24.808 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:25.066 22:12:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:25.066 22:12:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.066 22:12:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:25.066 22:12:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:25.066 22:12:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:25.066 22:12:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:25.066 22:12:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:25.066 22:12:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.325 22:12:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:25.584 22:12:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:25.584 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:25.584 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:25.584 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.584 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:25.584 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:25.843 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:26.101 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.101 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.101 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:26.101 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.101 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.101 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:26.101 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:26.101 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:26.101 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:26.101 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.101 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:26.101 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
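Everything from nthreads=8 onward is the parallel phase: eight null bdevs (null0 … null7) are created, then eight background add_remove workers each cycle their own namespace ID ten times, which is what produces the interleaved nvmf_subsystem_add_ns/nvmf_subsystem_remove_ns ordering above and below. Reassembled from the @14-@18 and @58-@66 trace lines, the structure is roughly:

add_remove() {    # @14-@18: add and remove one namespace ten times
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do          # @59-@60: one null bdev per worker
    $rpc_py bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; i++)); do          # @62-@64: launch the workers in the background
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"                             # @66: wait 68559 68560 ... 68570

The PIDs collected by pids+=($!) are the 68559 68560 68563 68564 68566 68568 68569 68570 that the wait at @66 blocks on.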
00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.359 22:12:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:26.616 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.616 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.616 22:12:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:26.616 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.616 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.616 22:12:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:26.616 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:26.616 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.616 22:12:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:26.616 22:12:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:26.616 22:12:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:26.616 22:12:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:26.874 22:12:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.874 22:12:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:26.874 22:12:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:26.874 22:12:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.874 22:12:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.132 22:12:23 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.132 22:12:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:27.392 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.392 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.392 22:12:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:27.392 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.392 22:12:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.392 22:12:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:27.392 22:12:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:27.392 22:12:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:27.392 22:12:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:27.392 22:12:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:27.656 22:12:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:27.656 22:12:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.656 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.656 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.656 22:12:24 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:27.656 22:12:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:27.656 22:12:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.656 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.656 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.656 22:12:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.914 22:12:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:28.172 22:12:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:28.172 22:12:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:28.172 22:12:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:28.172 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.172 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.172 22:12:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:28.172 22:12:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:28.172 22:12:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:12:28.430 22:12:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.430 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.430 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.430 22:12:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:28.430 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.430 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.430 22:12:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:28.430 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.430 22:12:24 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.430 22:12:24 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:28.430 22:12:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.430 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.430 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.430 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:28.688 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.688 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.688 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:28.688 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.688 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.688 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:28.688 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.688 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.688 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:28.688 22:12:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:28.688 22:12:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:28.688 22:12:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:28.946 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.946 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.946 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:28.946 22:12:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:28.946 22:12:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:28.946 22:12:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:28.946 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.946 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.946 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.203 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:29.461 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.461 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.461 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:29.461 22:12:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:29.461 22:12:25 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:29.461 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.461 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.461 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:29.461 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.461 22:12:25 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.462 22:12:25 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:29.462 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.720 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.979 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:29.979 22:12:26 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.238 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:30.496 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.496 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.496 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:30.496 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.496 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.496 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:30.496 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.496 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.496 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:30.496 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.496 22:12:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.496 22:12:26 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:30.496 22:12:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:30.496 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.496 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.496 22:12:27 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:12:30.754 22:12:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:30.754 22:12:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:30.754 22:12:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:30.754 22:12:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:30.754 22:12:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.754 22:12:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:30.754 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.754 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.754 22:12:27 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.754 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.754 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:31.014 22:12:27 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:31.014 22:12:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:31.014 22:12:27 -- nvmf/common.sh@116 -- # sync 00:12:31.014 22:12:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:31.014 22:12:27 -- nvmf/common.sh@119 -- # set +e 00:12:31.014 22:12:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:31.014 22:12:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:31.014 rmmod nvme_tcp 00:12:31.014 rmmod nvme_fabrics 00:12:31.014 rmmod nvme_keyring 00:12:31.273 22:12:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:31.273 22:12:27 -- nvmf/common.sh@123 -- # set -e 00:12:31.273 22:12:27 -- nvmf/common.sh@124 -- # return 0 00:12:31.273 22:12:27 -- nvmf/common.sh@477 -- # '[' -n 67416 ']' 00:12:31.273 22:12:27 -- nvmf/common.sh@478 -- # killprocess 67416 00:12:31.273 22:12:27 -- common/autotest_common.sh@936 -- # '[' -z 67416 ']' 00:12:31.273 22:12:27 -- common/autotest_common.sh@940 -- # kill -0 67416 00:12:31.273 22:12:27 -- common/autotest_common.sh@941 -- # uname 00:12:31.273 22:12:27 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:12:31.273 22:12:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67416 00:12:31.273 killing process with pid 67416 00:12:31.273 22:12:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:31.273 22:12:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:31.273 22:12:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67416' 00:12:31.273 22:12:27 -- common/autotest_common.sh@955 -- # kill 67416 00:12:31.273 22:12:27 -- common/autotest_common.sh@960 -- # wait 67416 00:12:31.533 22:12:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:31.533 22:12:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:31.533 22:12:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:31.533 22:12:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.533 22:12:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:31.533 22:12:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.533 22:12:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.533 22:12:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.533 22:12:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:31.533 ************************************ 00:12:31.533 END TEST nvmf_ns_hotplug_stress 00:12:31.533 ************************************ 00:12:31.533 00:12:31.533 real 0m44.051s 00:12:31.533 user 3m34.855s 00:12:31.533 sys 0m13.257s 00:12:31.533 22:12:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:31.533 22:12:28 -- common/autotest_common.sh@10 -- # set +x 00:12:31.533 22:12:28 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:31.533 22:12:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.533 22:12:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.533 22:12:28 -- common/autotest_common.sh@10 -- # set +x 00:12:31.533 ************************************ 00:12:31.533 START TEST nvmf_connect_stress 00:12:31.533 ************************************ 00:12:31.533 22:12:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:31.792 * Looking for test storage... 
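Just before connect_stress starts, the trace above shows nvmftestfini tearing the first target down: nvmfcleanup syncs and unloads the nvme-tcp/nvme-fabrics modules inside a bounded retry loop, killprocess stops the nvmf_tgt pid, and the initiator veth address is flushed. A condensed sketch of that sequence, using the pid and interface name recorded in the log (the one-second pause between unload attempts is an assumption, not shown in the trace):

sync                                        # flush outstanding I/O before unloading
for i in {1..20}; do                        # module removal can need a few attempts while queues drain
    modprobe -v -r nvme-tcp && break
    sleep 1                                 # assumed back-off; the trace only shows the retry loop
done
modprobe -v -r nvme-fabrics
kill 67416                                  # nvmf_tgt pid recorded earlier in this run
wait 67416                                  # works here because the target was launched from the same shell
ip -4 addr flush nvmf_init_if               # drop 10.0.0.1/24 from the initiator veth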
00:12:31.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.792 22:12:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:31.792 22:12:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:31.792 22:12:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:31.792 22:12:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:31.792 22:12:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:31.792 22:12:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:31.792 22:12:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:31.792 22:12:28 -- scripts/common.sh@335 -- # IFS=.-: 00:12:31.792 22:12:28 -- scripts/common.sh@335 -- # read -ra ver1 00:12:31.792 22:12:28 -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.792 22:12:28 -- scripts/common.sh@336 -- # read -ra ver2 00:12:31.792 22:12:28 -- scripts/common.sh@337 -- # local 'op=<' 00:12:31.792 22:12:28 -- scripts/common.sh@339 -- # ver1_l=2 00:12:31.792 22:12:28 -- scripts/common.sh@340 -- # ver2_l=1 00:12:31.792 22:12:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:31.792 22:12:28 -- scripts/common.sh@343 -- # case "$op" in 00:12:31.792 22:12:28 -- scripts/common.sh@344 -- # : 1 00:12:31.792 22:12:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:31.792 22:12:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:31.792 22:12:28 -- scripts/common.sh@364 -- # decimal 1 00:12:31.792 22:12:28 -- scripts/common.sh@352 -- # local d=1 00:12:31.792 22:12:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.792 22:12:28 -- scripts/common.sh@354 -- # echo 1 00:12:31.792 22:12:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:31.792 22:12:28 -- scripts/common.sh@365 -- # decimal 2 00:12:31.792 22:12:28 -- scripts/common.sh@352 -- # local d=2 00:12:31.792 22:12:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.792 22:12:28 -- scripts/common.sh@354 -- # echo 2 00:12:31.792 22:12:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:31.792 22:12:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:31.792 22:12:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:31.792 22:12:28 -- scripts/common.sh@367 -- # return 0 00:12:31.792 22:12:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.792 22:12:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:31.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.792 --rc genhtml_branch_coverage=1 00:12:31.792 --rc genhtml_function_coverage=1 00:12:31.792 --rc genhtml_legend=1 00:12:31.792 --rc geninfo_all_blocks=1 00:12:31.792 --rc geninfo_unexecuted_blocks=1 00:12:31.792 00:12:31.792 ' 00:12:31.792 22:12:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:31.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.792 --rc genhtml_branch_coverage=1 00:12:31.792 --rc genhtml_function_coverage=1 00:12:31.792 --rc genhtml_legend=1 00:12:31.792 --rc geninfo_all_blocks=1 00:12:31.792 --rc geninfo_unexecuted_blocks=1 00:12:31.792 00:12:31.792 ' 00:12:31.792 22:12:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:31.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.792 --rc genhtml_branch_coverage=1 00:12:31.792 --rc genhtml_function_coverage=1 00:12:31.792 --rc genhtml_legend=1 00:12:31.792 --rc geninfo_all_blocks=1 00:12:31.792 --rc geninfo_unexecuted_blocks=1 00:12:31.792 00:12:31.792 ' 00:12:31.792 
22:12:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:31.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.792 --rc genhtml_branch_coverage=1 00:12:31.792 --rc genhtml_function_coverage=1 00:12:31.792 --rc genhtml_legend=1 00:12:31.792 --rc geninfo_all_blocks=1 00:12:31.792 --rc geninfo_unexecuted_blocks=1 00:12:31.792 00:12:31.792 ' 00:12:31.792 22:12:28 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.792 22:12:28 -- nvmf/common.sh@7 -- # uname -s 00:12:31.792 22:12:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.792 22:12:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.792 22:12:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.792 22:12:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.792 22:12:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.792 22:12:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.792 22:12:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.792 22:12:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.792 22:12:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.792 22:12:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.792 22:12:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:12:31.792 22:12:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:12:31.792 22:12:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.792 22:12:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.792 22:12:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.792 22:12:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.792 22:12:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.792 22:12:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.792 22:12:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.793 22:12:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.793 22:12:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.793 22:12:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.793 22:12:28 -- paths/export.sh@5 -- # export PATH 00:12:31.793 22:12:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.793 22:12:28 -- nvmf/common.sh@46 -- # : 0 00:12:31.793 22:12:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:31.793 22:12:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:31.793 22:12:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:31.793 22:12:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.793 22:12:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.793 22:12:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:31.793 22:12:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:31.793 22:12:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:31.793 22:12:28 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:31.793 22:12:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:31.793 22:12:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.793 22:12:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:31.793 22:12:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:31.793 22:12:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:31.793 22:12:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.793 22:12:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.793 22:12:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.793 22:12:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:31.793 22:12:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:31.793 22:12:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:31.793 22:12:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:31.793 22:12:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:31.793 22:12:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:31.793 22:12:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.793 22:12:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.793 22:12:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.793 22:12:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:31.793 22:12:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.793 22:12:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.793 22:12:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.793 22:12:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:31.793 22:12:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.793 22:12:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.793 22:12:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.793 22:12:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.793 22:12:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:31.793 22:12:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:31.793 Cannot find device "nvmf_tgt_br" 00:12:31.793 22:12:28 -- nvmf/common.sh@154 -- # true 00:12:31.793 22:12:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.793 Cannot find device "nvmf_tgt_br2" 00:12:31.793 22:12:28 -- nvmf/common.sh@155 -- # true 00:12:31.793 22:12:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:31.793 22:12:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:31.793 Cannot find device "nvmf_tgt_br" 00:12:31.793 22:12:28 -- nvmf/common.sh@157 -- # true 00:12:31.793 22:12:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:31.793 Cannot find device "nvmf_tgt_br2" 00:12:31.793 22:12:28 -- nvmf/common.sh@158 -- # true 00:12:31.793 22:12:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:32.052 22:12:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:32.052 22:12:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.052 22:12:28 -- nvmf/common.sh@161 -- # true 00:12:32.052 22:12:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.052 22:12:28 -- nvmf/common.sh@162 -- # true 00:12:32.052 22:12:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:32.052 22:12:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:32.052 22:12:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:32.052 22:12:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:32.052 22:12:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:32.052 22:12:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:32.052 22:12:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:32.052 22:12:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:32.052 22:12:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:32.052 22:12:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:32.052 22:12:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:32.052 22:12:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:32.052 22:12:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:32.052 22:12:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:32.052 22:12:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:32.052 22:12:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:32.052 22:12:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:32.052 22:12:28 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:32.052 22:12:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:32.052 22:12:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:32.052 22:12:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.052 22:12:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.052 22:12:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.311 22:12:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:32.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:12:32.311 00:12:32.311 --- 10.0.0.2 ping statistics --- 00:12:32.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.311 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:32.311 22:12:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:32.311 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:32.311 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:32.311 00:12:32.311 --- 10.0.0.3 ping statistics --- 00:12:32.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.311 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:32.311 22:12:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:32.311 00:12:32.311 --- 10.0.0.1 ping statistics --- 00:12:32.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.311 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:32.311 22:12:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.311 22:12:28 -- nvmf/common.sh@421 -- # return 0 00:12:32.311 22:12:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:32.311 22:12:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.311 22:12:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:32.311 22:12:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:32.311 22:12:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.311 22:12:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:32.311 22:12:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:32.311 22:12:28 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:32.311 22:12:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:32.311 22:12:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:32.311 22:12:28 -- common/autotest_common.sh@10 -- # set +x 00:12:32.311 22:12:28 -- nvmf/common.sh@469 -- # nvmfpid=69906 00:12:32.311 22:12:28 -- nvmf/common.sh@470 -- # waitforlisten 69906 00:12:32.311 22:12:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:32.311 22:12:28 -- common/autotest_common.sh@829 -- # '[' -z 69906 ']' 00:12:32.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.311 22:12:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.311 22:12:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.311 22:12:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
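The nvmf_veth_init entries above build the virtual test network the rest of this run talks over: an initiator veth on the host (10.0.0.1), a target namespace nvmf_tgt_ns_spdk holding two target veths (10.0.0.2 and 10.0.0.3), all joined by a bridge, plus an iptables accept rule for the 4420 listener. Condensed from the commands in the trace (ordering slightly regrouped for readability):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # target addresses reachable from the host
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # initiator address reachable from the namespace

Once the pings pass, the target itself is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE in the trace), so every listener it opens on 10.0.0.2/10.0.0.3 is reachable only over this bridge.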
00:12:32.311 22:12:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.311 22:12:28 -- common/autotest_common.sh@10 -- # set +x 00:12:32.311 [2024-11-17 22:12:28.754391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:32.311 [2024-11-17 22:12:28.754625] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.311 [2024-11-17 22:12:28.891274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:32.570 [2024-11-17 22:12:28.998462] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:32.570 [2024-11-17 22:12:28.998980] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.570 [2024-11-17 22:12:28.999131] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.570 [2024-11-17 22:12:28.999304] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.570 [2024-11-17 22:12:28.999589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.570 [2024-11-17 22:12:28.999672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.570 [2024-11-17 22:12:28.999681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.135 22:12:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.135 22:12:29 -- common/autotest_common.sh@862 -- # return 0 00:12:33.135 22:12:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:33.135 22:12:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:33.135 22:12:29 -- common/autotest_common.sh@10 -- # set +x 00:12:33.394 22:12:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.394 22:12:29 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.394 22:12:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.394 22:12:29 -- common/autotest_common.sh@10 -- # set +x 00:12:33.394 [2024-11-17 22:12:29.767126] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.394 22:12:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.394 22:12:29 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:33.394 22:12:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.394 22:12:29 -- common/autotest_common.sh@10 -- # set +x 00:12:33.394 22:12:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.394 22:12:29 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.394 22:12:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.394 22:12:29 -- common/autotest_common.sh@10 -- # set +x 00:12:33.394 [2024-11-17 22:12:29.785372] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.394 22:12:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.394 22:12:29 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:33.394 22:12:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.394 22:12:29 -- common/autotest_common.sh@10 -- # set +x 00:12:33.394 NULL1 00:12:33.394 
22:12:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.394 22:12:29 -- target/connect_stress.sh@21 -- # PERF_PID=69964 00:12:33.394 22:12:29 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:33.394 22:12:29 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:33.394 22:12:29 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.394 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.394 22:12:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:33.395 22:12:29 -- target/connect_stress.sh@28 -- # cat 00:12:33.395 22:12:29 -- target/connect_stress.sh@34 -- # kill -0 
69964 00:12:33.395 22:12:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.395 22:12:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.395 22:12:29 -- common/autotest_common.sh@10 -- # set +x 00:12:33.653 22:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.653 22:12:30 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:33.653 22:12:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:33.653 22:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.653 22:12:30 -- common/autotest_common.sh@10 -- # set +x 00:12:34.219 22:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.219 22:12:30 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:34.219 22:12:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.219 22:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.219 22:12:30 -- common/autotest_common.sh@10 -- # set +x 00:12:34.477 22:12:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.477 22:12:30 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:34.477 22:12:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.477 22:12:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.477 22:12:30 -- common/autotest_common.sh@10 -- # set +x 00:12:34.735 22:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.735 22:12:31 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:34.735 22:12:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.735 22:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.735 22:12:31 -- common/autotest_common.sh@10 -- # set +x 00:12:34.993 22:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.993 22:12:31 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:34.993 22:12:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:34.993 22:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.993 22:12:31 -- common/autotest_common.sh@10 -- # set +x 00:12:35.278 22:12:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.278 22:12:31 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:35.278 22:12:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.278 22:12:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.278 22:12:31 -- common/autotest_common.sh@10 -- # set +x 00:12:35.610 22:12:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.610 22:12:32 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:35.611 22:12:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.611 22:12:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.611 22:12:32 -- common/autotest_common.sh@10 -- # set +x 00:12:35.876 22:12:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.876 22:12:32 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:35.876 22:12:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:35.876 22:12:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.876 22:12:32 -- common/autotest_common.sh@10 -- # set +x 00:12:36.442 22:12:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.442 22:12:32 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:36.442 22:12:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.442 22:12:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.442 22:12:32 -- common/autotest_common.sh@10 -- # set +x 00:12:36.701 22:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.701 22:12:33 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:36.701 22:12:33 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.701 22:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.701 22:12:33 -- common/autotest_common.sh@10 -- # set +x 00:12:36.959 22:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.959 22:12:33 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:36.959 22:12:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:36.959 22:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.959 22:12:33 -- common/autotest_common.sh@10 -- # set +x 00:12:37.218 22:12:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.218 22:12:33 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:37.218 22:12:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.218 22:12:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.218 22:12:33 -- common/autotest_common.sh@10 -- # set +x 00:12:37.477 22:12:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.477 22:12:34 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:37.477 22:12:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.477 22:12:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.477 22:12:34 -- common/autotest_common.sh@10 -- # set +x 00:12:38.044 22:12:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.044 22:12:34 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:38.044 22:12:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.044 22:12:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.044 22:12:34 -- common/autotest_common.sh@10 -- # set +x 00:12:38.302 22:12:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.302 22:12:34 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:38.302 22:12:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.302 22:12:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.302 22:12:34 -- common/autotest_common.sh@10 -- # set +x 00:12:38.561 22:12:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.561 22:12:35 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:38.561 22:12:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.561 22:12:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.561 22:12:35 -- common/autotest_common.sh@10 -- # set +x 00:12:38.820 22:12:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.820 22:12:35 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:38.820 22:12:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.820 22:12:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.820 22:12:35 -- common/autotest_common.sh@10 -- # set +x 00:12:39.388 22:12:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.388 22:12:35 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:39.388 22:12:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.388 22:12:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.388 22:12:35 -- common/autotest_common.sh@10 -- # set +x 00:12:39.648 22:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.648 22:12:36 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:39.648 22:12:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.648 22:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.648 22:12:36 -- common/autotest_common.sh@10 -- # set +x 00:12:39.907 22:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.907 22:12:36 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:39.907 22:12:36 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:12:39.907 22:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.907 22:12:36 -- common/autotest_common.sh@10 -- # set +x 00:12:40.166 22:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.166 22:12:36 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:40.166 22:12:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.166 22:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.166 22:12:36 -- common/autotest_common.sh@10 -- # set +x 00:12:40.425 22:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.425 22:12:36 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:40.425 22:12:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.425 22:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.425 22:12:36 -- common/autotest_common.sh@10 -- # set +x 00:12:40.992 22:12:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.992 22:12:37 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:40.992 22:12:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.992 22:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.992 22:12:37 -- common/autotest_common.sh@10 -- # set +x 00:12:41.251 22:12:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.251 22:12:37 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:41.251 22:12:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.251 22:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.251 22:12:37 -- common/autotest_common.sh@10 -- # set +x 00:12:41.509 22:12:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.509 22:12:37 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:41.509 22:12:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.509 22:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.509 22:12:37 -- common/autotest_common.sh@10 -- # set +x 00:12:41.768 22:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.768 22:12:38 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:41.768 22:12:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.768 22:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.768 22:12:38 -- common/autotest_common.sh@10 -- # set +x 00:12:42.027 22:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.027 22:12:38 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:42.027 22:12:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.027 22:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.027 22:12:38 -- common/autotest_common.sh@10 -- # set +x 00:12:42.596 22:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.596 22:12:38 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:42.596 22:12:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.596 22:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.596 22:12:38 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 22:12:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.854 22:12:39 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:42.854 22:12:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.854 22:12:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.854 22:12:39 -- common/autotest_common.sh@10 -- # set +x 00:12:43.112 22:12:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.112 22:12:39 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:43.112 22:12:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.112 22:12:39 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.112 22:12:39 -- common/autotest_common.sh@10 -- # set +x 00:12:43.370 22:12:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.371 22:12:39 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:43.371 22:12:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.371 22:12:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.371 22:12:39 -- common/autotest_common.sh@10 -- # set +x 00:12:43.371 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:43.630 22:12:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.630 22:12:40 -- target/connect_stress.sh@34 -- # kill -0 69964 00:12:43.630 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (69964) - No such process 00:12:43.630 22:12:40 -- target/connect_stress.sh@38 -- # wait 69964 00:12:43.630 22:12:40 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:43.630 22:12:40 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:43.630 22:12:40 -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:43.630 22:12:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:43.630 22:12:40 -- nvmf/common.sh@116 -- # sync 00:12:43.889 22:12:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:43.889 22:12:40 -- nvmf/common.sh@119 -- # set +e 00:12:43.889 22:12:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:43.889 22:12:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:43.889 rmmod nvme_tcp 00:12:43.889 rmmod nvme_fabrics 00:12:43.889 rmmod nvme_keyring 00:12:43.889 22:12:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:43.889 22:12:40 -- nvmf/common.sh@123 -- # set -e 00:12:43.889 22:12:40 -- nvmf/common.sh@124 -- # return 0 00:12:43.889 22:12:40 -- nvmf/common.sh@477 -- # '[' -n 69906 ']' 00:12:43.889 22:12:40 -- nvmf/common.sh@478 -- # killprocess 69906 00:12:43.889 22:12:40 -- common/autotest_common.sh@936 -- # '[' -z 69906 ']' 00:12:43.889 22:12:40 -- common/autotest_common.sh@940 -- # kill -0 69906 00:12:43.889 22:12:40 -- common/autotest_common.sh@941 -- # uname 00:12:43.889 22:12:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:43.889 22:12:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69906 00:12:43.889 killing process with pid 69906 00:12:43.889 22:12:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:43.889 22:12:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:43.889 22:12:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69906' 00:12:43.889 22:12:40 -- common/autotest_common.sh@955 -- # kill 69906 00:12:43.889 22:12:40 -- common/autotest_common.sh@960 -- # wait 69906 00:12:44.148 22:12:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:44.148 22:12:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:44.148 22:12:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:44.148 22:12:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.148 22:12:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:44.148 22:12:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.148 22:12:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.148 22:12:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.148 22:12:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:44.148 ************************************ 
00:12:44.148 END TEST nvmf_connect_stress 00:12:44.148 ************************************ 00:12:44.148 00:12:44.148 real 0m12.608s 00:12:44.148 user 0m41.840s 00:12:44.148 sys 0m3.047s 00:12:44.148 22:12:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:44.148 22:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:44.407 22:12:40 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:44.407 22:12:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:44.407 22:12:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.407 22:12:40 -- common/autotest_common.sh@10 -- # set +x 00:12:44.407 ************************************ 00:12:44.407 START TEST nvmf_fused_ordering 00:12:44.407 ************************************ 00:12:44.407 22:12:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:44.407 * Looking for test storage... 00:12:44.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:44.407 22:12:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:44.407 22:12:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:44.407 22:12:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:44.407 22:12:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:44.407 22:12:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:44.407 22:12:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:44.407 22:12:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:44.407 22:12:40 -- scripts/common.sh@335 -- # IFS=.-: 00:12:44.407 22:12:40 -- scripts/common.sh@335 -- # read -ra ver1 00:12:44.407 22:12:40 -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.407 22:12:40 -- scripts/common.sh@336 -- # read -ra ver2 00:12:44.407 22:12:40 -- scripts/common.sh@337 -- # local 'op=<' 00:12:44.407 22:12:40 -- scripts/common.sh@339 -- # ver1_l=2 00:12:44.407 22:12:40 -- scripts/common.sh@340 -- # ver2_l=1 00:12:44.407 22:12:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:44.407 22:12:40 -- scripts/common.sh@343 -- # case "$op" in 00:12:44.407 22:12:40 -- scripts/common.sh@344 -- # : 1 00:12:44.407 22:12:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:44.407 22:12:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.407 22:12:40 -- scripts/common.sh@364 -- # decimal 1 00:12:44.407 22:12:40 -- scripts/common.sh@352 -- # local d=1 00:12:44.407 22:12:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.407 22:12:40 -- scripts/common.sh@354 -- # echo 1 00:12:44.407 22:12:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:44.407 22:12:40 -- scripts/common.sh@365 -- # decimal 2 00:12:44.407 22:12:40 -- scripts/common.sh@352 -- # local d=2 00:12:44.407 22:12:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.407 22:12:40 -- scripts/common.sh@354 -- # echo 2 00:12:44.407 22:12:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:44.407 22:12:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:44.407 22:12:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:44.407 22:12:40 -- scripts/common.sh@367 -- # return 0 00:12:44.407 22:12:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.407 22:12:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:44.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.407 --rc genhtml_branch_coverage=1 00:12:44.407 --rc genhtml_function_coverage=1 00:12:44.407 --rc genhtml_legend=1 00:12:44.407 --rc geninfo_all_blocks=1 00:12:44.407 --rc geninfo_unexecuted_blocks=1 00:12:44.407 00:12:44.407 ' 00:12:44.407 22:12:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:44.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.407 --rc genhtml_branch_coverage=1 00:12:44.407 --rc genhtml_function_coverage=1 00:12:44.407 --rc genhtml_legend=1 00:12:44.407 --rc geninfo_all_blocks=1 00:12:44.407 --rc geninfo_unexecuted_blocks=1 00:12:44.407 00:12:44.407 ' 00:12:44.407 22:12:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:44.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.407 --rc genhtml_branch_coverage=1 00:12:44.407 --rc genhtml_function_coverage=1 00:12:44.407 --rc genhtml_legend=1 00:12:44.407 --rc geninfo_all_blocks=1 00:12:44.407 --rc geninfo_unexecuted_blocks=1 00:12:44.407 00:12:44.407 ' 00:12:44.407 22:12:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:44.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.407 --rc genhtml_branch_coverage=1 00:12:44.407 --rc genhtml_function_coverage=1 00:12:44.407 --rc genhtml_legend=1 00:12:44.407 --rc geninfo_all_blocks=1 00:12:44.407 --rc geninfo_unexecuted_blocks=1 00:12:44.407 00:12:44.407 ' 00:12:44.407 22:12:40 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:44.407 22:12:40 -- nvmf/common.sh@7 -- # uname -s 00:12:44.407 22:12:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.407 22:12:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.407 22:12:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.407 22:12:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.407 22:12:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.407 22:12:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.407 22:12:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.407 22:12:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.407 22:12:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.407 22:12:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.407 22:12:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 
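(Editor's aside on the host identity being generated in the trace above: the NQN produced by `nvme gen-hostnqn` is what the suite later hands to its connect calls, with the UUID tail reused as the host ID. The lines below are a hedged, minimal illustration of that pattern, not a verbatim excerpt of nvmf/common.sh; the connect example reuses the subsystem NQN and 10.0.0.2:4420 listener that appear elsewhere in this log.)

# Illustrative sketch only (assumes nvme-cli is installed and provides gen-hostnqn).
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep just the UUID portion after the last ':'
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# A later initiator-side connect would then carry a stable identity, e.g.:
# nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"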
00:12:44.407 22:12:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:12:44.407 22:12:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.407 22:12:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.407 22:12:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:44.407 22:12:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:44.407 22:12:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.407 22:12:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.407 22:12:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.407 22:12:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.407 22:12:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.407 22:12:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.407 22:12:40 -- paths/export.sh@5 -- # export PATH 00:12:44.408 22:12:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.408 22:12:40 -- nvmf/common.sh@46 -- # : 0 00:12:44.408 22:12:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:44.408 22:12:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:44.408 22:12:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:44.408 22:12:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.408 22:12:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.408 22:12:40 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:44.408 22:12:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:44.408 22:12:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:44.408 22:12:40 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:44.408 22:12:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:44.408 22:12:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.408 22:12:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:44.408 22:12:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:44.408 22:12:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:44.408 22:12:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.408 22:12:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.408 22:12:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.408 22:12:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:44.408 22:12:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:44.408 22:12:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:44.408 22:12:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:44.408 22:12:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:44.408 22:12:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:44.408 22:12:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.408 22:12:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.408 22:12:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:44.408 22:12:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:44.408 22:12:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:44.408 22:12:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:44.408 22:12:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:44.408 22:12:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.408 22:12:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:44.408 22:12:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:44.408 22:12:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:44.408 22:12:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:44.408 22:12:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:44.666 22:12:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:44.666 Cannot find device "nvmf_tgt_br" 00:12:44.667 22:12:41 -- nvmf/common.sh@154 -- # true 00:12:44.667 22:12:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:44.667 Cannot find device "nvmf_tgt_br2" 00:12:44.667 22:12:41 -- nvmf/common.sh@155 -- # true 00:12:44.667 22:12:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:44.667 22:12:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:44.667 Cannot find device "nvmf_tgt_br" 00:12:44.667 22:12:41 -- nvmf/common.sh@157 -- # true 00:12:44.667 22:12:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:44.667 Cannot find device "nvmf_tgt_br2" 00:12:44.667 22:12:41 -- nvmf/common.sh@158 -- # true 00:12:44.667 22:12:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:44.667 22:12:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:44.667 22:12:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:44.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:44.667 22:12:41 -- nvmf/common.sh@161 -- # true 00:12:44.667 22:12:41 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:44.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:44.667 22:12:41 -- nvmf/common.sh@162 -- # true 00:12:44.667 22:12:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:44.667 22:12:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:44.667 22:12:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:44.667 22:12:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:44.667 22:12:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:44.667 22:12:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:44.667 22:12:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:44.667 22:12:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:44.667 22:12:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:44.667 22:12:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:44.667 22:12:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:44.667 22:12:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:44.667 22:12:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:44.667 22:12:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:44.667 22:12:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:44.667 22:12:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:44.667 22:12:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:44.926 22:12:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:44.926 22:12:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:44.926 22:12:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:44.926 22:12:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:44.926 22:12:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:44.926 22:12:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:44.926 22:12:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:44.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:44.926 00:12:44.926 --- 10.0.0.2 ping statistics --- 00:12:44.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.926 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:44.926 22:12:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:44.926 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:44.926 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.150 ms 00:12:44.926 00:12:44.926 --- 10.0.0.3 ping statistics --- 00:12:44.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.926 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:12:44.926 22:12:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:44.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:12:44.926 00:12:44.926 --- 10.0.0.1 ping statistics --- 00:12:44.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.926 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:44.926 22:12:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.926 22:12:41 -- nvmf/common.sh@421 -- # return 0 00:12:44.926 22:12:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:44.926 22:12:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.926 22:12:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:44.926 22:12:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:44.926 22:12:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.926 22:12:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:44.926 22:12:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:44.926 22:12:41 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:44.926 22:12:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:44.926 22:12:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:44.926 22:12:41 -- common/autotest_common.sh@10 -- # set +x 00:12:44.926 22:12:41 -- nvmf/common.sh@469 -- # nvmfpid=70296 00:12:44.926 22:12:41 -- nvmf/common.sh@470 -- # waitforlisten 70296 00:12:44.926 22:12:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:44.926 22:12:41 -- common/autotest_common.sh@829 -- # '[' -z 70296 ']' 00:12:44.926 22:12:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.926 22:12:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.926 22:12:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.926 22:12:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.926 22:12:41 -- common/autotest_common.sh@10 -- # set +x 00:12:44.926 [2024-11-17 22:12:41.450245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:44.926 [2024-11-17 22:12:41.450371] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.185 [2024-11-17 22:12:41.598997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.185 [2024-11-17 22:12:41.682494] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:45.185 [2024-11-17 22:12:41.682661] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.185 [2024-11-17 22:12:41.682673] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.185 [2024-11-17 22:12:41.682681] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
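(Editor's aside on the topology the trace has just built: the target runs inside the nvmf_tgt_ns_spdk namespace and is reached from the host over veth pairs joined by the nvmf_br bridge. The following is a condensed, hedged reconstruction of the essential commands from the trace, for readability only; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and all error handling are omitted.)

# Condensed sketch of the veth/netns fabric set up by nvmf_veth_init (illustrative, not the library code).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # sanity check: host can reach the target namespace, as in the pings above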
00:12:45.185 [2024-11-17 22:12:41.682706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.753 22:12:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:45.753 22:12:42 -- common/autotest_common.sh@862 -- # return 0 00:12:45.753 22:12:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:45.753 22:12:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:45.753 22:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:46.012 22:12:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.012 22:12:42 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.012 22:12:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.012 22:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:46.012 [2024-11-17 22:12:42.409270] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.013 22:12:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.013 22:12:42 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:46.013 22:12:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.013 22:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:46.013 22:12:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.013 22:12:42 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.013 22:12:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.013 22:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:46.013 [2024-11-17 22:12:42.429419] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.013 22:12:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.013 22:12:42 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:46.013 22:12:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.013 22:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:46.013 NULL1 00:12:46.013 22:12:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.013 22:12:42 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:46.013 22:12:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.013 22:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:46.013 22:12:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.013 22:12:42 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:46.013 22:12:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.013 22:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:46.013 22:12:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.013 22:12:42 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:46.013 [2024-11-17 22:12:42.481017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:46.013 [2024-11-17 22:12:42.481068] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70346 ] 00:12:46.272 Attached to nqn.2016-06.io.spdk:cnode1 00:12:46.272 Namespace ID: 1 size: 1GB 00:12:46.272 fused_ordering(0) 00:12:46.272 fused_ordering(1) 00:12:46.272 fused_ordering(2) 00:12:46.272 fused_ordering(3) 00:12:46.272 fused_ordering(4) 00:12:46.272 fused_ordering(5) 00:12:46.272 fused_ordering(6) 00:12:46.272 fused_ordering(7) 00:12:46.272 fused_ordering(8) 00:12:46.272 fused_ordering(9) 00:12:46.272 fused_ordering(10) 00:12:46.272 fused_ordering(11) 00:12:46.272 fused_ordering(12) 00:12:46.272 fused_ordering(13) 00:12:46.272 fused_ordering(14) 00:12:46.272 fused_ordering(15) 00:12:46.272 fused_ordering(16) 00:12:46.272 fused_ordering(17) 00:12:46.272 fused_ordering(18) 00:12:46.272 fused_ordering(19) 00:12:46.272 fused_ordering(20) 00:12:46.272 fused_ordering(21) 00:12:46.272 fused_ordering(22) 00:12:46.272 fused_ordering(23) 00:12:46.272 fused_ordering(24) 00:12:46.272 fused_ordering(25) 00:12:46.272 fused_ordering(26) 00:12:46.272 fused_ordering(27) 00:12:46.272 fused_ordering(28) 00:12:46.272 fused_ordering(29) 00:12:46.272 fused_ordering(30) 00:12:46.272 fused_ordering(31) 00:12:46.272 fused_ordering(32) 00:12:46.272 fused_ordering(33) 00:12:46.272 fused_ordering(34) 00:12:46.272 fused_ordering(35) 00:12:46.272 fused_ordering(36) 00:12:46.272 fused_ordering(37) 00:12:46.272 fused_ordering(38) 00:12:46.272 fused_ordering(39) 00:12:46.272 fused_ordering(40) 00:12:46.272 fused_ordering(41) 00:12:46.272 fused_ordering(42) 00:12:46.272 fused_ordering(43) 00:12:46.272 fused_ordering(44) 00:12:46.272 fused_ordering(45) 00:12:46.272 fused_ordering(46) 00:12:46.272 fused_ordering(47) 00:12:46.272 fused_ordering(48) 00:12:46.272 fused_ordering(49) 00:12:46.272 fused_ordering(50) 00:12:46.272 fused_ordering(51) 00:12:46.272 fused_ordering(52) 00:12:46.272 fused_ordering(53) 00:12:46.272 fused_ordering(54) 00:12:46.272 fused_ordering(55) 00:12:46.272 fused_ordering(56) 00:12:46.272 fused_ordering(57) 00:12:46.272 fused_ordering(58) 00:12:46.272 fused_ordering(59) 00:12:46.272 fused_ordering(60) 00:12:46.272 fused_ordering(61) 00:12:46.272 fused_ordering(62) 00:12:46.272 fused_ordering(63) 00:12:46.272 fused_ordering(64) 00:12:46.272 fused_ordering(65) 00:12:46.272 fused_ordering(66) 00:12:46.272 fused_ordering(67) 00:12:46.272 fused_ordering(68) 00:12:46.272 fused_ordering(69) 00:12:46.272 fused_ordering(70) 00:12:46.272 fused_ordering(71) 00:12:46.272 fused_ordering(72) 00:12:46.272 fused_ordering(73) 00:12:46.272 fused_ordering(74) 00:12:46.272 fused_ordering(75) 00:12:46.272 fused_ordering(76) 00:12:46.272 fused_ordering(77) 00:12:46.272 fused_ordering(78) 00:12:46.272 fused_ordering(79) 00:12:46.272 fused_ordering(80) 00:12:46.272 fused_ordering(81) 00:12:46.272 fused_ordering(82) 00:12:46.272 fused_ordering(83) 00:12:46.272 fused_ordering(84) 00:12:46.272 fused_ordering(85) 00:12:46.272 fused_ordering(86) 00:12:46.272 fused_ordering(87) 00:12:46.272 fused_ordering(88) 00:12:46.272 fused_ordering(89) 00:12:46.272 fused_ordering(90) 00:12:46.272 fused_ordering(91) 00:12:46.272 fused_ordering(92) 00:12:46.272 fused_ordering(93) 00:12:46.272 fused_ordering(94) 00:12:46.272 fused_ordering(95) 00:12:46.272 fused_ordering(96) 00:12:46.272 fused_ordering(97) 00:12:46.272 fused_ordering(98) 
00:12:46.272 fused_ordering(99) 00:12:46.272 fused_ordering(100) 00:12:46.272 fused_ordering(101) 00:12:46.272 fused_ordering(102) 00:12:46.272 fused_ordering(103) 00:12:46.272 fused_ordering(104) 00:12:46.272 fused_ordering(105) 00:12:46.272 fused_ordering(106) 00:12:46.272 fused_ordering(107) 00:12:46.272 fused_ordering(108) 00:12:46.272 fused_ordering(109) 00:12:46.272 fused_ordering(110) 00:12:46.272 fused_ordering(111) 00:12:46.272 fused_ordering(112) 00:12:46.272 fused_ordering(113) 00:12:46.272 fused_ordering(114) 00:12:46.272 fused_ordering(115) 00:12:46.272 fused_ordering(116) 00:12:46.272 fused_ordering(117) 00:12:46.272 fused_ordering(118) 00:12:46.272 fused_ordering(119) 00:12:46.273 fused_ordering(120) 00:12:46.273 fused_ordering(121) 00:12:46.273 fused_ordering(122) 00:12:46.273 fused_ordering(123) 00:12:46.273 fused_ordering(124) 00:12:46.273 fused_ordering(125) 00:12:46.273 fused_ordering(126) 00:12:46.273 fused_ordering(127) 00:12:46.273 fused_ordering(128) 00:12:46.273 fused_ordering(129) 00:12:46.273 fused_ordering(130) 00:12:46.273 fused_ordering(131) 00:12:46.273 fused_ordering(132) 00:12:46.273 fused_ordering(133) 00:12:46.273 fused_ordering(134) 00:12:46.273 fused_ordering(135) 00:12:46.273 fused_ordering(136) 00:12:46.273 fused_ordering(137) 00:12:46.273 fused_ordering(138) 00:12:46.273 fused_ordering(139) 00:12:46.273 fused_ordering(140) 00:12:46.273 fused_ordering(141) 00:12:46.273 fused_ordering(142) 00:12:46.273 fused_ordering(143) 00:12:46.273 fused_ordering(144) 00:12:46.273 fused_ordering(145) 00:12:46.273 fused_ordering(146) 00:12:46.273 fused_ordering(147) 00:12:46.273 fused_ordering(148) 00:12:46.273 fused_ordering(149) 00:12:46.273 fused_ordering(150) 00:12:46.273 fused_ordering(151) 00:12:46.273 fused_ordering(152) 00:12:46.273 fused_ordering(153) 00:12:46.273 fused_ordering(154) 00:12:46.273 fused_ordering(155) 00:12:46.273 fused_ordering(156) 00:12:46.273 fused_ordering(157) 00:12:46.273 fused_ordering(158) 00:12:46.273 fused_ordering(159) 00:12:46.273 fused_ordering(160) 00:12:46.273 fused_ordering(161) 00:12:46.273 fused_ordering(162) 00:12:46.273 fused_ordering(163) 00:12:46.273 fused_ordering(164) 00:12:46.273 fused_ordering(165) 00:12:46.273 fused_ordering(166) 00:12:46.273 fused_ordering(167) 00:12:46.273 fused_ordering(168) 00:12:46.273 fused_ordering(169) 00:12:46.273 fused_ordering(170) 00:12:46.273 fused_ordering(171) 00:12:46.273 fused_ordering(172) 00:12:46.273 fused_ordering(173) 00:12:46.273 fused_ordering(174) 00:12:46.273 fused_ordering(175) 00:12:46.273 fused_ordering(176) 00:12:46.273 fused_ordering(177) 00:12:46.273 fused_ordering(178) 00:12:46.273 fused_ordering(179) 00:12:46.273 fused_ordering(180) 00:12:46.273 fused_ordering(181) 00:12:46.273 fused_ordering(182) 00:12:46.273 fused_ordering(183) 00:12:46.273 fused_ordering(184) 00:12:46.273 fused_ordering(185) 00:12:46.273 fused_ordering(186) 00:12:46.273 fused_ordering(187) 00:12:46.273 fused_ordering(188) 00:12:46.273 fused_ordering(189) 00:12:46.273 fused_ordering(190) 00:12:46.273 fused_ordering(191) 00:12:46.273 fused_ordering(192) 00:12:46.273 fused_ordering(193) 00:12:46.273 fused_ordering(194) 00:12:46.273 fused_ordering(195) 00:12:46.273 fused_ordering(196) 00:12:46.273 fused_ordering(197) 00:12:46.273 fused_ordering(198) 00:12:46.273 fused_ordering(199) 00:12:46.273 fused_ordering(200) 00:12:46.273 fused_ordering(201) 00:12:46.273 fused_ordering(202) 00:12:46.273 fused_ordering(203) 00:12:46.273 fused_ordering(204) 00:12:46.273 fused_ordering(205) 00:12:46.532 
fused_ordering(206) 00:12:46.532 fused_ordering(207) 00:12:46.532 fused_ordering(208) 00:12:46.532 fused_ordering(209) 00:12:46.532 fused_ordering(210) 00:12:46.532 fused_ordering(211) 00:12:46.532 fused_ordering(212) 00:12:46.532 fused_ordering(213) 00:12:46.532 fused_ordering(214) 00:12:46.532 fused_ordering(215) 00:12:46.532 fused_ordering(216) 00:12:46.532 fused_ordering(217) 00:12:46.532 fused_ordering(218) 00:12:46.532 fused_ordering(219) 00:12:46.532 fused_ordering(220) 00:12:46.532 fused_ordering(221) 00:12:46.532 fused_ordering(222) 00:12:46.532 fused_ordering(223) 00:12:46.532 fused_ordering(224) 00:12:46.532 fused_ordering(225) 00:12:46.532 fused_ordering(226) 00:12:46.532 fused_ordering(227) 00:12:46.532 fused_ordering(228) 00:12:46.532 fused_ordering(229) 00:12:46.532 fused_ordering(230) 00:12:46.532 fused_ordering(231) 00:12:46.532 fused_ordering(232) 00:12:46.532 fused_ordering(233) 00:12:46.532 fused_ordering(234) 00:12:46.532 fused_ordering(235) 00:12:46.532 fused_ordering(236) 00:12:46.532 fused_ordering(237) 00:12:46.532 fused_ordering(238) 00:12:46.532 fused_ordering(239) 00:12:46.532 fused_ordering(240) 00:12:46.532 fused_ordering(241) 00:12:46.532 fused_ordering(242) 00:12:46.532 fused_ordering(243) 00:12:46.532 fused_ordering(244) 00:12:46.532 fused_ordering(245) 00:12:46.532 fused_ordering(246) 00:12:46.532 fused_ordering(247) 00:12:46.532 fused_ordering(248) 00:12:46.532 fused_ordering(249) 00:12:46.532 fused_ordering(250) 00:12:46.532 fused_ordering(251) 00:12:46.532 fused_ordering(252) 00:12:46.532 fused_ordering(253) 00:12:46.532 fused_ordering(254) 00:12:46.532 fused_ordering(255) 00:12:46.532 fused_ordering(256) 00:12:46.532 fused_ordering(257) 00:12:46.532 fused_ordering(258) 00:12:46.532 fused_ordering(259) 00:12:46.532 fused_ordering(260) 00:12:46.532 fused_ordering(261) 00:12:46.532 fused_ordering(262) 00:12:46.532 fused_ordering(263) 00:12:46.532 fused_ordering(264) 00:12:46.532 fused_ordering(265) 00:12:46.532 fused_ordering(266) 00:12:46.532 fused_ordering(267) 00:12:46.532 fused_ordering(268) 00:12:46.532 fused_ordering(269) 00:12:46.532 fused_ordering(270) 00:12:46.532 fused_ordering(271) 00:12:46.532 fused_ordering(272) 00:12:46.532 fused_ordering(273) 00:12:46.532 fused_ordering(274) 00:12:46.532 fused_ordering(275) 00:12:46.532 fused_ordering(276) 00:12:46.532 fused_ordering(277) 00:12:46.532 fused_ordering(278) 00:12:46.532 fused_ordering(279) 00:12:46.532 fused_ordering(280) 00:12:46.532 fused_ordering(281) 00:12:46.532 fused_ordering(282) 00:12:46.532 fused_ordering(283) 00:12:46.532 fused_ordering(284) 00:12:46.532 fused_ordering(285) 00:12:46.532 fused_ordering(286) 00:12:46.532 fused_ordering(287) 00:12:46.532 fused_ordering(288) 00:12:46.532 fused_ordering(289) 00:12:46.532 fused_ordering(290) 00:12:46.532 fused_ordering(291) 00:12:46.532 fused_ordering(292) 00:12:46.532 fused_ordering(293) 00:12:46.532 fused_ordering(294) 00:12:46.532 fused_ordering(295) 00:12:46.532 fused_ordering(296) 00:12:46.532 fused_ordering(297) 00:12:46.532 fused_ordering(298) 00:12:46.532 fused_ordering(299) 00:12:46.532 fused_ordering(300) 00:12:46.532 fused_ordering(301) 00:12:46.532 fused_ordering(302) 00:12:46.532 fused_ordering(303) 00:12:46.532 fused_ordering(304) 00:12:46.532 fused_ordering(305) 00:12:46.532 fused_ordering(306) 00:12:46.532 fused_ordering(307) 00:12:46.532 fused_ordering(308) 00:12:46.532 fused_ordering(309) 00:12:46.532 fused_ordering(310) 00:12:46.532 fused_ordering(311) 00:12:46.532 fused_ordering(312) 00:12:46.532 fused_ordering(313) 
00:12:46.532 fused_ordering(314) 00:12:46.532 fused_ordering(315) 00:12:46.532 fused_ordering(316) 00:12:46.532 fused_ordering(317) 00:12:46.532 fused_ordering(318) 00:12:46.532 fused_ordering(319) 00:12:46.532 fused_ordering(320) 00:12:46.532 fused_ordering(321) 00:12:46.532 fused_ordering(322) 00:12:46.532 fused_ordering(323) 00:12:46.532 fused_ordering(324) 00:12:46.532 fused_ordering(325) 00:12:46.532 fused_ordering(326) 00:12:46.532 fused_ordering(327) 00:12:46.532 fused_ordering(328) 00:12:46.532 fused_ordering(329) 00:12:46.532 fused_ordering(330) 00:12:46.532 fused_ordering(331) 00:12:46.532 fused_ordering(332) 00:12:46.532 fused_ordering(333) 00:12:46.532 fused_ordering(334) 00:12:46.532 fused_ordering(335) 00:12:46.532 fused_ordering(336) 00:12:46.532 fused_ordering(337) 00:12:46.532 fused_ordering(338) 00:12:46.532 fused_ordering(339) 00:12:46.532 fused_ordering(340) 00:12:46.532 fused_ordering(341) 00:12:46.532 fused_ordering(342) 00:12:46.532 fused_ordering(343) 00:12:46.532 fused_ordering(344) 00:12:46.532 fused_ordering(345) 00:12:46.532 fused_ordering(346) 00:12:46.532 fused_ordering(347) 00:12:46.532 fused_ordering(348) 00:12:46.532 fused_ordering(349) 00:12:46.532 fused_ordering(350) 00:12:46.532 fused_ordering(351) 00:12:46.532 fused_ordering(352) 00:12:46.532 fused_ordering(353) 00:12:46.532 fused_ordering(354) 00:12:46.532 fused_ordering(355) 00:12:46.532 fused_ordering(356) 00:12:46.532 fused_ordering(357) 00:12:46.532 fused_ordering(358) 00:12:46.532 fused_ordering(359) 00:12:46.532 fused_ordering(360) 00:12:46.532 fused_ordering(361) 00:12:46.532 fused_ordering(362) 00:12:46.532 fused_ordering(363) 00:12:46.532 fused_ordering(364) 00:12:46.532 fused_ordering(365) 00:12:46.532 fused_ordering(366) 00:12:46.532 fused_ordering(367) 00:12:46.532 fused_ordering(368) 00:12:46.532 fused_ordering(369) 00:12:46.532 fused_ordering(370) 00:12:46.532 fused_ordering(371) 00:12:46.532 fused_ordering(372) 00:12:46.532 fused_ordering(373) 00:12:46.532 fused_ordering(374) 00:12:46.532 fused_ordering(375) 00:12:46.532 fused_ordering(376) 00:12:46.532 fused_ordering(377) 00:12:46.532 fused_ordering(378) 00:12:46.532 fused_ordering(379) 00:12:46.532 fused_ordering(380) 00:12:46.532 fused_ordering(381) 00:12:46.532 fused_ordering(382) 00:12:46.532 fused_ordering(383) 00:12:46.532 fused_ordering(384) 00:12:46.532 fused_ordering(385) 00:12:46.532 fused_ordering(386) 00:12:46.533 fused_ordering(387) 00:12:46.533 fused_ordering(388) 00:12:46.533 fused_ordering(389) 00:12:46.533 fused_ordering(390) 00:12:46.533 fused_ordering(391) 00:12:46.533 fused_ordering(392) 00:12:46.533 fused_ordering(393) 00:12:46.533 fused_ordering(394) 00:12:46.533 fused_ordering(395) 00:12:46.533 fused_ordering(396) 00:12:46.533 fused_ordering(397) 00:12:46.533 fused_ordering(398) 00:12:46.533 fused_ordering(399) 00:12:46.533 fused_ordering(400) 00:12:46.533 fused_ordering(401) 00:12:46.533 fused_ordering(402) 00:12:46.533 fused_ordering(403) 00:12:46.533 fused_ordering(404) 00:12:46.533 fused_ordering(405) 00:12:46.533 fused_ordering(406) 00:12:46.533 fused_ordering(407) 00:12:46.533 fused_ordering(408) 00:12:46.533 fused_ordering(409) 00:12:46.533 fused_ordering(410) 00:12:46.791 fused_ordering(411) 00:12:46.791 fused_ordering(412) 00:12:46.791 fused_ordering(413) 00:12:46.791 fused_ordering(414) 00:12:46.791 fused_ordering(415) 00:12:46.791 fused_ordering(416) 00:12:46.791 fused_ordering(417) 00:12:46.791 fused_ordering(418) 00:12:46.791 fused_ordering(419) 00:12:46.791 fused_ordering(420) 00:12:46.791 
fused_ordering(421) 00:12:46.791 fused_ordering(422) 00:12:46.791 fused_ordering(423) 00:12:46.791 fused_ordering(424) 00:12:46.791 fused_ordering(425) 00:12:46.791 fused_ordering(426) 00:12:46.791 fused_ordering(427) 00:12:46.791 fused_ordering(428) 00:12:46.791 fused_ordering(429) 00:12:46.791 fused_ordering(430) 00:12:46.791 fused_ordering(431) 00:12:46.791 fused_ordering(432) 00:12:46.791 fused_ordering(433) 00:12:46.791 fused_ordering(434) 00:12:46.791 fused_ordering(435) 00:12:46.791 fused_ordering(436) 00:12:46.791 fused_ordering(437) 00:12:46.791 fused_ordering(438) 00:12:47.050 fused_ordering(439) 00:12:47.050 fused_ordering(440) 00:12:47.050 fused_ordering(441) 00:12:47.050 fused_ordering(442) 00:12:47.050 fused_ordering(443) 00:12:47.050 fused_ordering(444) 00:12:47.050 fused_ordering(445) 00:12:47.050 fused_ordering(446) 00:12:47.050 fused_ordering(447) 00:12:47.050 fused_ordering(448) 00:12:47.050 fused_ordering(449) 00:12:47.050 fused_ordering(450) 00:12:47.050 fused_ordering(451) 00:12:47.050 fused_ordering(452) 00:12:47.050 fused_ordering(453) 00:12:47.050 fused_ordering(454) 00:12:47.050 fused_ordering(455) 00:12:47.050 fused_ordering(456) 00:12:47.050 fused_ordering(457) 00:12:47.050 fused_ordering(458) 00:12:47.050 fused_ordering(459) 00:12:47.050 fused_ordering(460) 00:12:47.050 fused_ordering(461) 00:12:47.050 fused_ordering(462) 00:12:47.050 fused_ordering(463) 00:12:47.050 fused_ordering(464) 00:12:47.050 fused_ordering(465) 00:12:47.050 fused_ordering(466) 00:12:47.050 fused_ordering(467) 00:12:47.050 fused_ordering(468) 00:12:47.050 fused_ordering(469) 00:12:47.050 fused_ordering(470) 00:12:47.050 fused_ordering(471) 00:12:47.050 fused_ordering(472) 00:12:47.050 fused_ordering(473) 00:12:47.050 fused_ordering(474) 00:12:47.050 fused_ordering(475) 00:12:47.050 fused_ordering(476) 00:12:47.050 fused_ordering(477) 00:12:47.050 fused_ordering(478) 00:12:47.050 fused_ordering(479) 00:12:47.050 fused_ordering(480) 00:12:47.050 fused_ordering(481) 00:12:47.050 fused_ordering(482) 00:12:47.050 fused_ordering(483) 00:12:47.050 fused_ordering(484) 00:12:47.050 fused_ordering(485) 00:12:47.050 fused_ordering(486) 00:12:47.050 fused_ordering(487) 00:12:47.050 fused_ordering(488) 00:12:47.050 fused_ordering(489) 00:12:47.050 fused_ordering(490) 00:12:47.050 fused_ordering(491) 00:12:47.050 fused_ordering(492) 00:12:47.050 fused_ordering(493) 00:12:47.050 fused_ordering(494) 00:12:47.050 fused_ordering(495) 00:12:47.050 fused_ordering(496) 00:12:47.050 fused_ordering(497) 00:12:47.050 fused_ordering(498) 00:12:47.050 fused_ordering(499) 00:12:47.050 fused_ordering(500) 00:12:47.050 fused_ordering(501) 00:12:47.050 fused_ordering(502) 00:12:47.050 fused_ordering(503) 00:12:47.050 fused_ordering(504) 00:12:47.050 fused_ordering(505) 00:12:47.050 fused_ordering(506) 00:12:47.050 fused_ordering(507) 00:12:47.050 fused_ordering(508) 00:12:47.050 fused_ordering(509) 00:12:47.050 fused_ordering(510) 00:12:47.050 fused_ordering(511) 00:12:47.050 fused_ordering(512) 00:12:47.050 fused_ordering(513) 00:12:47.050 fused_ordering(514) 00:12:47.050 fused_ordering(515) 00:12:47.050 fused_ordering(516) 00:12:47.050 fused_ordering(517) 00:12:47.050 fused_ordering(518) 00:12:47.050 fused_ordering(519) 00:12:47.050 fused_ordering(520) 00:12:47.050 fused_ordering(521) 00:12:47.050 fused_ordering(522) 00:12:47.050 fused_ordering(523) 00:12:47.050 fused_ordering(524) 00:12:47.050 fused_ordering(525) 00:12:47.050 fused_ordering(526) 00:12:47.050 fused_ordering(527) 00:12:47.050 fused_ordering(528) 
00:12:47.050 fused_ordering(529) 00:12:47.050 fused_ordering(530) 00:12:47.050 fused_ordering(531) 00:12:47.050 fused_ordering(532) 00:12:47.050 fused_ordering(533) 00:12:47.050 fused_ordering(534) 00:12:47.050 fused_ordering(535) 00:12:47.050 fused_ordering(536) 00:12:47.050 fused_ordering(537) 00:12:47.050 fused_ordering(538) 00:12:47.050 fused_ordering(539) 00:12:47.050 fused_ordering(540) 00:12:47.050 fused_ordering(541) 00:12:47.050 fused_ordering(542) 00:12:47.050 fused_ordering(543) 00:12:47.050 fused_ordering(544) 00:12:47.050 fused_ordering(545) 00:12:47.050 fused_ordering(546) 00:12:47.050 fused_ordering(547) 00:12:47.050 fused_ordering(548) 00:12:47.050 fused_ordering(549) 00:12:47.050 fused_ordering(550) 00:12:47.050 fused_ordering(551) 00:12:47.050 fused_ordering(552) 00:12:47.050 fused_ordering(553) 00:12:47.050 fused_ordering(554) 00:12:47.050 fused_ordering(555) 00:12:47.050 fused_ordering(556) 00:12:47.050 fused_ordering(557) 00:12:47.050 fused_ordering(558) 00:12:47.050 fused_ordering(559) 00:12:47.050 fused_ordering(560) 00:12:47.050 fused_ordering(561) 00:12:47.050 fused_ordering(562) 00:12:47.050 fused_ordering(563) 00:12:47.050 fused_ordering(564) 00:12:47.050 fused_ordering(565) 00:12:47.050 fused_ordering(566) 00:12:47.050 fused_ordering(567) 00:12:47.050 fused_ordering(568) 00:12:47.050 fused_ordering(569) 00:12:47.050 fused_ordering(570) 00:12:47.050 fused_ordering(571) 00:12:47.050 fused_ordering(572) 00:12:47.050 fused_ordering(573) 00:12:47.050 fused_ordering(574) 00:12:47.050 fused_ordering(575) 00:12:47.051 fused_ordering(576) 00:12:47.051 fused_ordering(577) 00:12:47.051 fused_ordering(578) 00:12:47.051 fused_ordering(579) 00:12:47.051 fused_ordering(580) 00:12:47.051 fused_ordering(581) 00:12:47.051 fused_ordering(582) 00:12:47.051 fused_ordering(583) 00:12:47.051 fused_ordering(584) 00:12:47.051 fused_ordering(585) 00:12:47.051 fused_ordering(586) 00:12:47.051 fused_ordering(587) 00:12:47.051 fused_ordering(588) 00:12:47.051 fused_ordering(589) 00:12:47.051 fused_ordering(590) 00:12:47.051 fused_ordering(591) 00:12:47.051 fused_ordering(592) 00:12:47.051 fused_ordering(593) 00:12:47.051 fused_ordering(594) 00:12:47.051 fused_ordering(595) 00:12:47.051 fused_ordering(596) 00:12:47.051 fused_ordering(597) 00:12:47.051 fused_ordering(598) 00:12:47.051 fused_ordering(599) 00:12:47.051 fused_ordering(600) 00:12:47.051 fused_ordering(601) 00:12:47.051 fused_ordering(602) 00:12:47.051 fused_ordering(603) 00:12:47.051 fused_ordering(604) 00:12:47.051 fused_ordering(605) 00:12:47.051 fused_ordering(606) 00:12:47.051 fused_ordering(607) 00:12:47.051 fused_ordering(608) 00:12:47.051 fused_ordering(609) 00:12:47.051 fused_ordering(610) 00:12:47.051 fused_ordering(611) 00:12:47.051 fused_ordering(612) 00:12:47.051 fused_ordering(613) 00:12:47.051 fused_ordering(614) 00:12:47.051 fused_ordering(615) 00:12:47.310 fused_ordering(616) 00:12:47.310 fused_ordering(617) 00:12:47.310 fused_ordering(618) 00:12:47.310 fused_ordering(619) 00:12:47.310 fused_ordering(620) 00:12:47.310 fused_ordering(621) 00:12:47.310 fused_ordering(622) 00:12:47.310 fused_ordering(623) 00:12:47.310 fused_ordering(624) 00:12:47.310 fused_ordering(625) 00:12:47.310 fused_ordering(626) 00:12:47.310 fused_ordering(627) 00:12:47.310 fused_ordering(628) 00:12:47.310 fused_ordering(629) 00:12:47.310 fused_ordering(630) 00:12:47.310 fused_ordering(631) 00:12:47.310 fused_ordering(632) 00:12:47.310 fused_ordering(633) 00:12:47.310 fused_ordering(634) 00:12:47.310 fused_ordering(635) 00:12:47.310 
fused_ordering(636) 00:12:47.310 fused_ordering(637) 00:12:47.310 fused_ordering(638) 00:12:47.310 fused_ordering(639) 00:12:47.310 fused_ordering(640) 00:12:47.310 fused_ordering(641) 00:12:47.310 fused_ordering(642) 00:12:47.310 fused_ordering(643) 00:12:47.310 fused_ordering(644) 00:12:47.310 fused_ordering(645) 00:12:47.310 fused_ordering(646) 00:12:47.310 fused_ordering(647) 00:12:47.310 fused_ordering(648) 00:12:47.310 fused_ordering(649) 00:12:47.310 fused_ordering(650) 00:12:47.310 fused_ordering(651) 00:12:47.310 fused_ordering(652) 00:12:47.310 fused_ordering(653) 00:12:47.310 fused_ordering(654) 00:12:47.310 fused_ordering(655) 00:12:47.310 fused_ordering(656) 00:12:47.310 fused_ordering(657) 00:12:47.310 fused_ordering(658) 00:12:47.310 fused_ordering(659) 00:12:47.310 fused_ordering(660) 00:12:47.310 fused_ordering(661) 00:12:47.310 fused_ordering(662) 00:12:47.310 fused_ordering(663) 00:12:47.310 fused_ordering(664) 00:12:47.310 fused_ordering(665) 00:12:47.310 fused_ordering(666) 00:12:47.310 fused_ordering(667) 00:12:47.310 fused_ordering(668) 00:12:47.310 fused_ordering(669) 00:12:47.310 fused_ordering(670) 00:12:47.310 fused_ordering(671) 00:12:47.310 fused_ordering(672) 00:12:47.310 fused_ordering(673) 00:12:47.310 fused_ordering(674) 00:12:47.310 fused_ordering(675) 00:12:47.310 fused_ordering(676) 00:12:47.310 fused_ordering(677) 00:12:47.310 fused_ordering(678) 00:12:47.310 fused_ordering(679) 00:12:47.310 fused_ordering(680) 00:12:47.310 fused_ordering(681) 00:12:47.310 fused_ordering(682) 00:12:47.310 fused_ordering(683) 00:12:47.310 fused_ordering(684) 00:12:47.310 fused_ordering(685) 00:12:47.310 fused_ordering(686) 00:12:47.310 fused_ordering(687) 00:12:47.310 fused_ordering(688) 00:12:47.310 fused_ordering(689) 00:12:47.310 fused_ordering(690) 00:12:47.310 fused_ordering(691) 00:12:47.310 fused_ordering(692) 00:12:47.310 fused_ordering(693) 00:12:47.310 fused_ordering(694) 00:12:47.310 fused_ordering(695) 00:12:47.310 fused_ordering(696) 00:12:47.310 fused_ordering(697) 00:12:47.310 fused_ordering(698) 00:12:47.310 fused_ordering(699) 00:12:47.310 fused_ordering(700) 00:12:47.310 fused_ordering(701) 00:12:47.310 fused_ordering(702) 00:12:47.310 fused_ordering(703) 00:12:47.310 fused_ordering(704) 00:12:47.310 fused_ordering(705) 00:12:47.310 fused_ordering(706) 00:12:47.310 fused_ordering(707) 00:12:47.310 fused_ordering(708) 00:12:47.310 fused_ordering(709) 00:12:47.310 fused_ordering(710) 00:12:47.310 fused_ordering(711) 00:12:47.310 fused_ordering(712) 00:12:47.310 fused_ordering(713) 00:12:47.310 fused_ordering(714) 00:12:47.310 fused_ordering(715) 00:12:47.310 fused_ordering(716) 00:12:47.310 fused_ordering(717) 00:12:47.310 fused_ordering(718) 00:12:47.310 fused_ordering(719) 00:12:47.310 fused_ordering(720) 00:12:47.310 fused_ordering(721) 00:12:47.310 fused_ordering(722) 00:12:47.310 fused_ordering(723) 00:12:47.310 fused_ordering(724) 00:12:47.310 fused_ordering(725) 00:12:47.310 fused_ordering(726) 00:12:47.310 fused_ordering(727) 00:12:47.310 fused_ordering(728) 00:12:47.310 fused_ordering(729) 00:12:47.310 fused_ordering(730) 00:12:47.310 fused_ordering(731) 00:12:47.310 fused_ordering(732) 00:12:47.310 fused_ordering(733) 00:12:47.310 fused_ordering(734) 00:12:47.310 fused_ordering(735) 00:12:47.310 fused_ordering(736) 00:12:47.310 fused_ordering(737) 00:12:47.310 fused_ordering(738) 00:12:47.310 fused_ordering(739) 00:12:47.310 fused_ordering(740) 00:12:47.310 fused_ordering(741) 00:12:47.310 fused_ordering(742) 00:12:47.310 fused_ordering(743) 
00:12:47.310 fused_ordering(744) 00:12:47.310 fused_ordering(745) 00:12:47.310 fused_ordering(746) 00:12:47.310 fused_ordering(747) 00:12:47.310 fused_ordering(748) 00:12:47.310 fused_ordering(749) 00:12:47.310 fused_ordering(750) 00:12:47.310 fused_ordering(751) 00:12:47.310 fused_ordering(752) 00:12:47.310 fused_ordering(753) 00:12:47.310 fused_ordering(754) 00:12:47.310 fused_ordering(755) 00:12:47.310 fused_ordering(756) 00:12:47.310 fused_ordering(757) 00:12:47.310 fused_ordering(758) 00:12:47.310 fused_ordering(759) 00:12:47.310 fused_ordering(760) 00:12:47.310 fused_ordering(761) 00:12:47.310 fused_ordering(762) 00:12:47.310 fused_ordering(763) 00:12:47.310 fused_ordering(764) 00:12:47.310 fused_ordering(765) 00:12:47.310 fused_ordering(766) 00:12:47.310 fused_ordering(767) 00:12:47.310 fused_ordering(768) 00:12:47.310 fused_ordering(769) 00:12:47.310 fused_ordering(770) 00:12:47.310 fused_ordering(771) 00:12:47.310 fused_ordering(772) 00:12:47.310 fused_ordering(773) 00:12:47.310 fused_ordering(774) 00:12:47.310 fused_ordering(775) 00:12:47.310 fused_ordering(776) 00:12:47.310 fused_ordering(777) 00:12:47.310 fused_ordering(778) 00:12:47.310 fused_ordering(779) 00:12:47.310 fused_ordering(780) 00:12:47.310 fused_ordering(781) 00:12:47.310 fused_ordering(782) 00:12:47.310 fused_ordering(783) 00:12:47.310 fused_ordering(784) 00:12:47.310 fused_ordering(785) 00:12:47.310 fused_ordering(786) 00:12:47.310 fused_ordering(787) 00:12:47.310 fused_ordering(788) 00:12:47.310 fused_ordering(789) 00:12:47.310 fused_ordering(790) 00:12:47.310 fused_ordering(791) 00:12:47.310 fused_ordering(792) 00:12:47.310 fused_ordering(793) 00:12:47.310 fused_ordering(794) 00:12:47.310 fused_ordering(795) 00:12:47.310 fused_ordering(796) 00:12:47.310 fused_ordering(797) 00:12:47.310 fused_ordering(798) 00:12:47.310 fused_ordering(799) 00:12:47.310 fused_ordering(800) 00:12:47.310 fused_ordering(801) 00:12:47.310 fused_ordering(802) 00:12:47.310 fused_ordering(803) 00:12:47.310 fused_ordering(804) 00:12:47.310 fused_ordering(805) 00:12:47.310 fused_ordering(806) 00:12:47.310 fused_ordering(807) 00:12:47.310 fused_ordering(808) 00:12:47.310 fused_ordering(809) 00:12:47.310 fused_ordering(810) 00:12:47.310 fused_ordering(811) 00:12:47.310 fused_ordering(812) 00:12:47.310 fused_ordering(813) 00:12:47.310 fused_ordering(814) 00:12:47.310 fused_ordering(815) 00:12:47.310 fused_ordering(816) 00:12:47.310 fused_ordering(817) 00:12:47.310 fused_ordering(818) 00:12:47.310 fused_ordering(819) 00:12:47.310 fused_ordering(820) 00:12:47.878 fused_ordering(821) 00:12:47.878 fused_ordering(822) 00:12:47.878 fused_ordering(823) 00:12:47.878 fused_ordering(824) 00:12:47.878 fused_ordering(825) 00:12:47.878 fused_ordering(826) 00:12:47.878 fused_ordering(827) 00:12:47.878 fused_ordering(828) 00:12:47.878 fused_ordering(829) 00:12:47.878 fused_ordering(830) 00:12:47.878 fused_ordering(831) 00:12:47.878 fused_ordering(832) 00:12:47.878 fused_ordering(833) 00:12:47.878 fused_ordering(834) 00:12:47.878 fused_ordering(835) 00:12:47.878 fused_ordering(836) 00:12:47.878 fused_ordering(837) 00:12:47.878 fused_ordering(838) 00:12:47.878 fused_ordering(839) 00:12:47.878 fused_ordering(840) 00:12:47.878 fused_ordering(841) 00:12:47.878 fused_ordering(842) 00:12:47.878 fused_ordering(843) 00:12:47.878 fused_ordering(844) 00:12:47.878 fused_ordering(845) 00:12:47.878 fused_ordering(846) 00:12:47.878 fused_ordering(847) 00:12:47.878 fused_ordering(848) 00:12:47.878 fused_ordering(849) 00:12:47.878 fused_ordering(850) 00:12:47.878 
fused_ordering(851) 00:12:47.878 fused_ordering(852) 00:12:47.878 fused_ordering(853) 00:12:47.878 fused_ordering(854) 00:12:47.878 fused_ordering(855) 00:12:47.878 fused_ordering(856) 00:12:47.878 fused_ordering(857) 00:12:47.878 fused_ordering(858) 00:12:47.878 fused_ordering(859) 00:12:47.878 fused_ordering(860) 00:12:47.878 fused_ordering(861) 00:12:47.878 fused_ordering(862) 00:12:47.878 fused_ordering(863) 00:12:47.879 fused_ordering(864) 00:12:47.879 fused_ordering(865) 00:12:47.879 fused_ordering(866) 00:12:47.879 fused_ordering(867) 00:12:47.879 fused_ordering(868) 00:12:47.879 fused_ordering(869) 00:12:47.879 fused_ordering(870) 00:12:47.879 fused_ordering(871) 00:12:47.879 fused_ordering(872) 00:12:47.879 fused_ordering(873) 00:12:47.879 fused_ordering(874) 00:12:47.879 fused_ordering(875) 00:12:47.879 fused_ordering(876) 00:12:47.879 fused_ordering(877) 00:12:47.879 fused_ordering(878) 00:12:47.879 fused_ordering(879) 00:12:47.879 fused_ordering(880) 00:12:47.879 fused_ordering(881) 00:12:47.879 fused_ordering(882) 00:12:47.879 fused_ordering(883) 00:12:47.879 fused_ordering(884) 00:12:47.879 fused_ordering(885) 00:12:47.879 fused_ordering(886) 00:12:47.879 fused_ordering(887) 00:12:47.879 fused_ordering(888) 00:12:47.879 fused_ordering(889) 00:12:47.879 fused_ordering(890) 00:12:47.879 fused_ordering(891) 00:12:47.879 fused_ordering(892) 00:12:47.879 fused_ordering(893) 00:12:47.879 fused_ordering(894) 00:12:47.879 fused_ordering(895) 00:12:47.879 fused_ordering(896) 00:12:47.879 fused_ordering(897) 00:12:47.879 fused_ordering(898) 00:12:47.879 fused_ordering(899) 00:12:47.879 fused_ordering(900) 00:12:47.879 fused_ordering(901) 00:12:47.879 fused_ordering(902) 00:12:47.879 fused_ordering(903) 00:12:47.879 fused_ordering(904) 00:12:47.879 fused_ordering(905) 00:12:47.879 fused_ordering(906) 00:12:47.879 fused_ordering(907) 00:12:47.879 fused_ordering(908) 00:12:47.879 fused_ordering(909) 00:12:47.879 fused_ordering(910) 00:12:47.879 fused_ordering(911) 00:12:47.879 fused_ordering(912) 00:12:47.879 fused_ordering(913) 00:12:47.879 fused_ordering(914) 00:12:47.879 fused_ordering(915) 00:12:47.879 fused_ordering(916) 00:12:47.879 fused_ordering(917) 00:12:47.879 fused_ordering(918) 00:12:47.879 fused_ordering(919) 00:12:47.879 fused_ordering(920) 00:12:47.879 fused_ordering(921) 00:12:47.879 fused_ordering(922) 00:12:47.879 fused_ordering(923) 00:12:47.879 fused_ordering(924) 00:12:47.879 fused_ordering(925) 00:12:47.879 fused_ordering(926) 00:12:47.879 fused_ordering(927) 00:12:47.879 fused_ordering(928) 00:12:47.879 fused_ordering(929) 00:12:47.879 fused_ordering(930) 00:12:47.879 fused_ordering(931) 00:12:47.879 fused_ordering(932) 00:12:47.879 fused_ordering(933) 00:12:47.879 fused_ordering(934) 00:12:47.879 fused_ordering(935) 00:12:47.879 fused_ordering(936) 00:12:47.879 fused_ordering(937) 00:12:47.879 fused_ordering(938) 00:12:47.879 fused_ordering(939) 00:12:47.879 fused_ordering(940) 00:12:47.879 fused_ordering(941) 00:12:47.879 fused_ordering(942) 00:12:47.879 fused_ordering(943) 00:12:47.879 fused_ordering(944) 00:12:47.879 fused_ordering(945) 00:12:47.879 fused_ordering(946) 00:12:47.879 fused_ordering(947) 00:12:47.879 fused_ordering(948) 00:12:47.879 fused_ordering(949) 00:12:47.879 fused_ordering(950) 00:12:47.879 fused_ordering(951) 00:12:47.879 fused_ordering(952) 00:12:47.879 fused_ordering(953) 00:12:47.879 fused_ordering(954) 00:12:47.879 fused_ordering(955) 00:12:47.879 fused_ordering(956) 00:12:47.879 fused_ordering(957) 00:12:47.879 fused_ordering(958) 
00:12:47.879 fused_ordering(959) 00:12:47.879 fused_ordering(960) 00:12:47.879 fused_ordering(961) 00:12:47.879 fused_ordering(962) 00:12:47.879 fused_ordering(963) 00:12:47.879 fused_ordering(964) 00:12:47.879 fused_ordering(965) 00:12:47.879 fused_ordering(966) 00:12:47.879 fused_ordering(967) 00:12:47.879 fused_ordering(968) 00:12:47.879 fused_ordering(969) 00:12:47.879 fused_ordering(970) 00:12:47.879 fused_ordering(971) 00:12:47.879 fused_ordering(972) 00:12:47.879 fused_ordering(973) 00:12:47.879 fused_ordering(974) 00:12:47.879 fused_ordering(975) 00:12:47.879 fused_ordering(976) 00:12:47.879 fused_ordering(977) 00:12:47.879 fused_ordering(978) 00:12:47.879 fused_ordering(979) 00:12:47.879 fused_ordering(980) 00:12:47.879 fused_ordering(981) 00:12:47.879 fused_ordering(982) 00:12:47.879 fused_ordering(983) 00:12:47.879 fused_ordering(984) 00:12:47.879 fused_ordering(985) 00:12:47.879 fused_ordering(986) 00:12:47.879 fused_ordering(987) 00:12:47.879 fused_ordering(988) 00:12:47.879 fused_ordering(989) 00:12:47.879 fused_ordering(990) 00:12:47.879 fused_ordering(991) 00:12:47.879 fused_ordering(992) 00:12:47.879 fused_ordering(993) 00:12:47.879 fused_ordering(994) 00:12:47.879 fused_ordering(995) 00:12:47.879 fused_ordering(996) 00:12:47.879 fused_ordering(997) 00:12:47.879 fused_ordering(998) 00:12:47.879 fused_ordering(999) 00:12:47.879 fused_ordering(1000) 00:12:47.879 fused_ordering(1001) 00:12:47.879 fused_ordering(1002) 00:12:47.879 fused_ordering(1003) 00:12:47.879 fused_ordering(1004) 00:12:47.879 fused_ordering(1005) 00:12:47.879 fused_ordering(1006) 00:12:47.879 fused_ordering(1007) 00:12:47.879 fused_ordering(1008) 00:12:47.879 fused_ordering(1009) 00:12:47.879 fused_ordering(1010) 00:12:47.879 fused_ordering(1011) 00:12:47.879 fused_ordering(1012) 00:12:47.879 fused_ordering(1013) 00:12:47.879 fused_ordering(1014) 00:12:47.879 fused_ordering(1015) 00:12:47.879 fused_ordering(1016) 00:12:47.879 fused_ordering(1017) 00:12:47.879 fused_ordering(1018) 00:12:47.879 fused_ordering(1019) 00:12:47.879 fused_ordering(1020) 00:12:47.879 fused_ordering(1021) 00:12:47.879 fused_ordering(1022) 00:12:47.879 fused_ordering(1023) 00:12:47.879 22:12:44 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:47.879 22:12:44 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:47.879 22:12:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:47.879 22:12:44 -- nvmf/common.sh@116 -- # sync 00:12:47.879 22:12:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:47.879 22:12:44 -- nvmf/common.sh@119 -- # set +e 00:12:47.879 22:12:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:47.879 22:12:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:47.879 rmmod nvme_tcp 00:12:47.879 rmmod nvme_fabrics 00:12:47.879 rmmod nvme_keyring 00:12:47.879 22:12:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:47.879 22:12:44 -- nvmf/common.sh@123 -- # set -e 00:12:47.879 22:12:44 -- nvmf/common.sh@124 -- # return 0 00:12:47.879 22:12:44 -- nvmf/common.sh@477 -- # '[' -n 70296 ']' 00:12:47.879 22:12:44 -- nvmf/common.sh@478 -- # killprocess 70296 00:12:47.879 22:12:44 -- common/autotest_common.sh@936 -- # '[' -z 70296 ']' 00:12:47.879 22:12:44 -- common/autotest_common.sh@940 -- # kill -0 70296 00:12:47.879 22:12:44 -- common/autotest_common.sh@941 -- # uname 00:12:47.879 22:12:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:47.879 22:12:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70296 00:12:47.879 22:12:44 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:47.879 killing process with pid 70296 00:12:47.879 22:12:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:47.879 22:12:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70296' 00:12:47.879 22:12:44 -- common/autotest_common.sh@955 -- # kill 70296 00:12:47.879 22:12:44 -- common/autotest_common.sh@960 -- # wait 70296 00:12:48.138 22:12:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:48.138 22:12:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:48.138 22:12:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:48.138 22:12:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.138 22:12:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:48.138 22:12:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.138 22:12:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.138 22:12:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.138 22:12:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:48.138 00:12:48.138 real 0m3.936s 00:12:48.138 user 0m4.493s 00:12:48.138 sys 0m1.255s 00:12:48.138 22:12:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:48.138 22:12:44 -- common/autotest_common.sh@10 -- # set +x 00:12:48.138 ************************************ 00:12:48.138 END TEST nvmf_fused_ordering 00:12:48.138 ************************************ 00:12:48.396 22:12:44 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:48.396 22:12:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:48.396 22:12:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:48.396 22:12:44 -- common/autotest_common.sh@10 -- # set +x 00:12:48.396 ************************************ 00:12:48.396 START TEST nvmf_delete_subsystem 00:12:48.396 ************************************ 00:12:48.396 22:12:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:48.396 * Looking for test storage... 
00:12:48.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:48.396 22:12:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:48.396 22:12:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:48.396 22:12:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:48.396 22:12:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:48.396 22:12:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:48.396 22:12:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:48.396 22:12:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:48.396 22:12:44 -- scripts/common.sh@335 -- # IFS=.-: 00:12:48.396 22:12:44 -- scripts/common.sh@335 -- # read -ra ver1 00:12:48.396 22:12:44 -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.396 22:12:44 -- scripts/common.sh@336 -- # read -ra ver2 00:12:48.396 22:12:44 -- scripts/common.sh@337 -- # local 'op=<' 00:12:48.396 22:12:44 -- scripts/common.sh@339 -- # ver1_l=2 00:12:48.396 22:12:44 -- scripts/common.sh@340 -- # ver2_l=1 00:12:48.396 22:12:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:48.396 22:12:44 -- scripts/common.sh@343 -- # case "$op" in 00:12:48.396 22:12:44 -- scripts/common.sh@344 -- # : 1 00:12:48.396 22:12:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:48.396 22:12:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:48.396 22:12:44 -- scripts/common.sh@364 -- # decimal 1 00:12:48.396 22:12:44 -- scripts/common.sh@352 -- # local d=1 00:12:48.396 22:12:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.396 22:12:44 -- scripts/common.sh@354 -- # echo 1 00:12:48.396 22:12:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:48.397 22:12:44 -- scripts/common.sh@365 -- # decimal 2 00:12:48.397 22:12:44 -- scripts/common.sh@352 -- # local d=2 00:12:48.397 22:12:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.397 22:12:44 -- scripts/common.sh@354 -- # echo 2 00:12:48.397 22:12:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:48.397 22:12:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:48.397 22:12:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:48.397 22:12:44 -- scripts/common.sh@367 -- # return 0 00:12:48.397 22:12:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.397 22:12:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:48.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.397 --rc genhtml_branch_coverage=1 00:12:48.397 --rc genhtml_function_coverage=1 00:12:48.397 --rc genhtml_legend=1 00:12:48.397 --rc geninfo_all_blocks=1 00:12:48.397 --rc geninfo_unexecuted_blocks=1 00:12:48.397 00:12:48.397 ' 00:12:48.397 22:12:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:48.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.397 --rc genhtml_branch_coverage=1 00:12:48.397 --rc genhtml_function_coverage=1 00:12:48.397 --rc genhtml_legend=1 00:12:48.397 --rc geninfo_all_blocks=1 00:12:48.397 --rc geninfo_unexecuted_blocks=1 00:12:48.397 00:12:48.397 ' 00:12:48.397 22:12:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:48.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.397 --rc genhtml_branch_coverage=1 00:12:48.397 --rc genhtml_function_coverage=1 00:12:48.397 --rc genhtml_legend=1 00:12:48.397 --rc geninfo_all_blocks=1 00:12:48.397 --rc geninfo_unexecuted_blocks=1 00:12:48.397 00:12:48.397 ' 00:12:48.397 
22:12:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:48.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.397 --rc genhtml_branch_coverage=1 00:12:48.397 --rc genhtml_function_coverage=1 00:12:48.397 --rc genhtml_legend=1 00:12:48.397 --rc geninfo_all_blocks=1 00:12:48.397 --rc geninfo_unexecuted_blocks=1 00:12:48.397 00:12:48.397 ' 00:12:48.397 22:12:44 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:48.397 22:12:44 -- nvmf/common.sh@7 -- # uname -s 00:12:48.397 22:12:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.397 22:12:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.397 22:12:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.397 22:12:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.397 22:12:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.397 22:12:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.397 22:12:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.397 22:12:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.397 22:12:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.397 22:12:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.397 22:12:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:12:48.397 22:12:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:12:48.397 22:12:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.397 22:12:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.397 22:12:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:48.397 22:12:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.397 22:12:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.397 22:12:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.397 22:12:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.397 22:12:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.397 22:12:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.397 22:12:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.397 22:12:44 -- paths/export.sh@5 -- # export PATH 00:12:48.397 22:12:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.397 22:12:44 -- nvmf/common.sh@46 -- # : 0 00:12:48.397 22:12:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:48.397 22:12:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:48.397 22:12:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:48.397 22:12:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.397 22:12:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.397 22:12:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:48.397 22:12:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:48.397 22:12:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:48.397 22:12:44 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:48.397 22:12:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:48.397 22:12:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.397 22:12:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:48.397 22:12:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:48.397 22:12:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:48.397 22:12:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.397 22:12:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.397 22:12:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.397 22:12:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:48.397 22:12:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:48.397 22:12:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:48.397 22:12:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:48.397 22:12:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:48.397 22:12:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:48.397 22:12:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.397 22:12:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.397 22:12:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:48.397 22:12:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:48.397 22:12:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:48.397 22:12:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:48.397 22:12:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:48.397 22:12:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:48.397 22:12:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:48.397 22:12:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:48.397 22:12:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:48.397 22:12:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:48.397 22:12:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:48.397 22:12:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:48.656 Cannot find device "nvmf_tgt_br" 00:12:48.656 22:12:45 -- nvmf/common.sh@154 -- # true 00:12:48.656 22:12:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.656 Cannot find device "nvmf_tgt_br2" 00:12:48.656 22:12:45 -- nvmf/common.sh@155 -- # true 00:12:48.656 22:12:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:48.656 22:12:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:48.656 Cannot find device "nvmf_tgt_br" 00:12:48.656 22:12:45 -- nvmf/common.sh@157 -- # true 00:12:48.656 22:12:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:48.656 Cannot find device "nvmf_tgt_br2" 00:12:48.656 22:12:45 -- nvmf/common.sh@158 -- # true 00:12:48.656 22:12:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:48.656 22:12:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:48.656 22:12:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:48.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.656 22:12:45 -- nvmf/common.sh@161 -- # true 00:12:48.656 22:12:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:48.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.656 22:12:45 -- nvmf/common.sh@162 -- # true 00:12:48.656 22:12:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:48.656 22:12:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:48.656 22:12:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:48.656 22:12:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:48.656 22:12:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:48.656 22:12:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:48.656 22:12:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:48.656 22:12:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:48.656 22:12:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:48.656 22:12:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:48.656 22:12:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:48.656 22:12:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:48.656 22:12:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:48.656 22:12:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:48.656 22:12:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:48.915 22:12:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:48.915 22:12:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:48.915 22:12:45 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:48.915 22:12:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:48.915 22:12:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:48.915 22:12:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:48.915 22:12:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:48.915 22:12:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:48.915 22:12:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:48.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:12:48.915 00:12:48.915 --- 10.0.0.2 ping statistics --- 00:12:48.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.915 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:12:48.915 22:12:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:48.915 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:48.915 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:12:48.915 00:12:48.915 --- 10.0.0.3 ping statistics --- 00:12:48.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.915 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:48.915 22:12:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:48.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:48.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:48.915 00:12:48.915 --- 10.0.0.1 ping statistics --- 00:12:48.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.915 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:48.915 22:12:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.915 22:12:45 -- nvmf/common.sh@421 -- # return 0 00:12:48.915 22:12:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:48.915 22:12:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.915 22:12:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:48.916 22:12:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:48.916 22:12:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.916 22:12:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:48.916 22:12:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:48.916 22:12:45 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:48.916 22:12:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:48.916 22:12:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:48.916 22:12:45 -- common/autotest_common.sh@10 -- # set +x 00:12:48.916 22:12:45 -- nvmf/common.sh@469 -- # nvmfpid=70539 00:12:48.916 22:12:45 -- nvmf/common.sh@470 -- # waitforlisten 70539 00:12:48.916 22:12:45 -- common/autotest_common.sh@829 -- # '[' -z 70539 ']' 00:12:48.916 22:12:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.916 22:12:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.916 22:12:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:48.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.916 22:12:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
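The ip netns / ip link / iptables commands traced above (nvmf_veth_init in nvmf/common.sh) build a small veth-and-bridge test network: the initiator side stays in the root namespace on 10.0.0.1, while the target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, bridged together over nvmf_br. A condensed, standalone sketch of that topology follows; it assumes iproute2, iptables and root privileges, reuses the interface and address names from the log, omits the second target interface (10.0.0.3), and leaves out the cleanup and error handling the real script performs.

    # Namespace for the SPDK target plus two veth pairs (initiator side and target side)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addresses: initiator in the root namespace, target inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # Bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP (port 4420) reach the initiator interface and verify connectivity
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2

With this in place the target can be launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt) while perf and nvme connect run from the root namespace, which is what the trace does next.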
00:12:48.916 22:12:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.916 22:12:45 -- common/autotest_common.sh@10 -- # set +x 00:12:48.916 [2024-11-17 22:12:45.441073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:48.916 [2024-11-17 22:12:45.441171] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.183 [2024-11-17 22:12:45.583587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:49.183 [2024-11-17 22:12:45.692526] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:49.183 [2024-11-17 22:12:45.692750] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.183 [2024-11-17 22:12:45.692770] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.183 [2024-11-17 22:12:45.692782] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.183 [2024-11-17 22:12:45.692913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.183 [2024-11-17 22:12:45.692930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.805 22:12:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.805 22:12:46 -- common/autotest_common.sh@862 -- # return 0 00:12:49.805 22:12:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:49.805 22:12:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:49.805 22:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:49.805 22:12:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.805 22:12:46 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.805 22:12:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.805 22:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:49.805 [2024-11-17 22:12:46.357391] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.805 22:12:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.805 22:12:46 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:49.805 22:12:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.805 22:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:49.805 22:12:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.805 22:12:46 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.805 22:12:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.805 22:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:49.805 [2024-11-17 22:12:46.373769] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.805 22:12:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.805 22:12:46 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:49.805 22:12:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.805 22:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:49.805 NULL1 00:12:49.805 22:12:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.805 22:12:46 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:49.805 22:12:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.805 22:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:49.805 Delay0 00:12:49.805 22:12:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.805 22:12:46 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.805 22:12:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.805 22:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:49.805 22:12:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.806 22:12:46 -- target/delete_subsystem.sh@28 -- # perf_pid=70590 00:12:49.806 22:12:46 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:49.806 22:12:46 -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:50.064 [2024-11-17 22:12:46.568071] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:51.968 22:12:48 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.968 22:12:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.968 22:12:48 -- common/autotest_common.sh@10 -- # set +x 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write 
completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 [2024-11-17 22:12:48.602992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be4950 is same with the state(5) to be set 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 
00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.227 starting I/O failed: -6 00:12:52.227 Write completed with error (sct=0, sc=8) 00:12:52.227 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, 
sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 Write completed with error (sct=0, sc=8) 00:12:52.228 Read completed with error (sct=0, sc=8) 00:12:52.228 starting I/O failed: -6 00:12:52.228 [2024-11-17 22:12:48.605904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f453400bf20 is same with the state(5) to be set 00:12:53.165 [2024-11-17 22:12:49.580955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1be55a0 is same with the state(5) to be set 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 [2024-11-17 22:12:49.603891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be37d0 is same with the state(5) to be set 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 [2024-11-17 22:12:49.604744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be3d30 is same with the state(5) to be set 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 
Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 [2024-11-17 22:12:49.605319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4534000c00 is same with the state(5) to be set 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with 
error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 Write completed with error (sct=0, sc=8) 00:12:53.165 Read completed with error (sct=0, sc=8) 00:12:53.165 [2024-11-17 22:12:49.606325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f453400c1d0 is same with the state(5) to be set 00:12:53.165 [2024-11-17 22:12:49.606802] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be55a0 (9): Bad file descriptor 00:12:53.165 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:53.165 22:12:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.165 22:12:49 -- target/delete_subsystem.sh@34 -- # delay=0 00:12:53.165 22:12:49 -- target/delete_subsystem.sh@35 -- # kill -0 70590 00:12:53.165 22:12:49 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:53.165 Initializing NVMe Controllers 00:12:53.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:53.165 Controller IO queue size 128, less than required. 00:12:53.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:53.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:53.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:53.165 Initialization complete. Launching workers. 00:12:53.165 ======================================================== 00:12:53.165 Latency(us) 00:12:53.165 Device Information : IOPS MiB/s Average min max 00:12:53.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.87 0.08 891470.84 428.54 1011160.80 00:12:53.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.36 0.08 976468.05 442.64 2003677.52 00:12:53.165 ======================================================== 00:12:53.165 Total : 343.22 0.17 934153.96 428.54 2003677.52 00:12:53.165 00:12:53.734 22:12:50 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:53.734 22:12:50 -- target/delete_subsystem.sh@35 -- # kill -0 70590 00:12:53.734 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70590) - No such process 00:12:53.734 22:12:50 -- target/delete_subsystem.sh@45 -- # NOT wait 70590 00:12:53.734 22:12:50 -- common/autotest_common.sh@650 -- # local es=0 00:12:53.734 22:12:50 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 70590 00:12:53.734 22:12:50 -- common/autotest_common.sh@638 -- # local arg=wait 00:12:53.734 22:12:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.734 22:12:50 -- common/autotest_common.sh@642 -- # type -t wait 00:12:53.734 22:12:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.734 22:12:50 -- common/autotest_common.sh@653 -- # wait 70590 00:12:53.734 22:12:50 -- common/autotest_common.sh@653 -- # es=1 00:12:53.734 22:12:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:53.734 22:12:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:53.734 22:12:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:53.734 22:12:50 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -m 10 00:12:53.734 22:12:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.734 22:12:50 -- common/autotest_common.sh@10 -- # set +x 00:12:53.734 22:12:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.734 22:12:50 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.734 22:12:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.734 22:12:50 -- common/autotest_common.sh@10 -- # set +x 00:12:53.734 [2024-11-17 22:12:50.133266] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.734 22:12:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.734 22:12:50 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.734 22:12:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.734 22:12:50 -- common/autotest_common.sh@10 -- # set +x 00:12:53.734 22:12:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.734 22:12:50 -- target/delete_subsystem.sh@54 -- # perf_pid=70633 00:12:53.734 22:12:50 -- target/delete_subsystem.sh@56 -- # delay=0 00:12:53.734 22:12:50 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:53.734 22:12:50 -- target/delete_subsystem.sh@57 -- # kill -0 70633 00:12:53.734 22:12:50 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:53.734 [2024-11-17 22:12:50.301313] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:54.303 22:12:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:54.303 22:12:50 -- target/delete_subsystem.sh@57 -- # kill -0 70633 00:12:54.303 22:12:50 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:54.563 22:12:51 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:54.563 22:12:51 -- target/delete_subsystem.sh@57 -- # kill -0 70633 00:12:54.563 22:12:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:55.130 22:12:51 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:55.130 22:12:51 -- target/delete_subsystem.sh@57 -- # kill -0 70633 00:12:55.130 22:12:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:55.698 22:12:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:55.698 22:12:52 -- target/delete_subsystem.sh@57 -- # kill -0 70633 00:12:55.698 22:12:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:56.266 22:12:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:56.266 22:12:52 -- target/delete_subsystem.sh@57 -- # kill -0 70633 00:12:56.266 22:12:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:56.834 22:12:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:56.834 22:12:53 -- target/delete_subsystem.sh@57 -- # kill -0 70633 00:12:56.834 22:12:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:56.834 Initializing NVMe Controllers 00:12:56.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:56.834 Controller IO queue size 128, less than required. 00:12:56.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
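The delete_subsystem test traced above follows a simple pattern: expose a namespace backed by a delay bdev stacked on a null bdev (so queued I/O takes on the order of a second to complete), start spdk_nvme_perf against the TCP listener, then tear the subsystem down while commands are still in flight; the long runs of "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" are the expected fallout of that teardown. The sketch below condenses the rpc_cmd calls from the log into direct scripts/rpc.py invocations; rpc_cmd is assumed here to be a thin wrapper around rpc.py talking to the default /var/tmp/spdk.sock, and running perf in the background with & is an illustration rather than how the harness actually sequences it.

    # Target-side setup (the rpc_cmd calls traced in the log)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512        # ~1000 MB null bdev, 512 B blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000        # latencies in microseconds, roughly 1 s per I/O
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Initiator-side load, then delete the subsystem while I/O is still queued
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    wait    # perf exits reporting the aborted commands

The second phase visible around this point in the log re-creates the subsystem, starts a shorter perf run (-t 3), and simply polls the perf process with kill -0 in a sleep 0.5 loop until it exits, which is why its latency summary shows clean ~1 s completions instead of errors.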
00:12:56.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:56.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:56.834 Initialization complete. Launching workers. 00:12:56.834 ======================================================== 00:12:56.834 Latency(us) 00:12:56.834 Device Information : IOPS MiB/s Average min max 00:12:56.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004468.41 1000181.09 1041658.76 00:12:56.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007070.44 1000255.49 1020033.14 00:12:56.834 ======================================================== 00:12:56.834 Total : 256.00 0.12 1005769.42 1000181.09 1041658.76 00:12:56.834 00:12:57.093 22:12:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:57.093 22:12:53 -- target/delete_subsystem.sh@57 -- # kill -0 70633 00:12:57.093 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70633) - No such process 00:12:57.093 22:12:53 -- target/delete_subsystem.sh@67 -- # wait 70633 00:12:57.093 22:12:53 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:57.093 22:12:53 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:57.093 22:12:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:57.093 22:12:53 -- nvmf/common.sh@116 -- # sync 00:12:57.351 22:12:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:57.351 22:12:53 -- nvmf/common.sh@119 -- # set +e 00:12:57.351 22:12:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:57.351 22:12:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:57.352 rmmod nvme_tcp 00:12:57.352 rmmod nvme_fabrics 00:12:57.352 rmmod nvme_keyring 00:12:57.352 22:12:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:57.352 22:12:53 -- nvmf/common.sh@123 -- # set -e 00:12:57.352 22:12:53 -- nvmf/common.sh@124 -- # return 0 00:12:57.352 22:12:53 -- nvmf/common.sh@477 -- # '[' -n 70539 ']' 00:12:57.352 22:12:53 -- nvmf/common.sh@478 -- # killprocess 70539 00:12:57.352 22:12:53 -- common/autotest_common.sh@936 -- # '[' -z 70539 ']' 00:12:57.352 22:12:53 -- common/autotest_common.sh@940 -- # kill -0 70539 00:12:57.352 22:12:53 -- common/autotest_common.sh@941 -- # uname 00:12:57.352 22:12:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:57.352 22:12:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70539 00:12:57.352 killing process with pid 70539 00:12:57.352 22:12:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:57.352 22:12:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:57.352 22:12:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70539' 00:12:57.352 22:12:53 -- common/autotest_common.sh@955 -- # kill 70539 00:12:57.352 22:12:53 -- common/autotest_common.sh@960 -- # wait 70539 00:12:57.610 22:12:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:57.610 22:12:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:57.610 22:12:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:57.610 22:12:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.610 22:12:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:57.610 22:12:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.610 22:12:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.610 22:12:54 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.610 22:12:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:57.610 00:12:57.610 real 0m9.317s 00:12:57.610 user 0m28.790s 00:12:57.610 sys 0m1.255s 00:12:57.610 22:12:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:57.610 22:12:54 -- common/autotest_common.sh@10 -- # set +x 00:12:57.610 ************************************ 00:12:57.610 END TEST nvmf_delete_subsystem 00:12:57.610 ************************************ 00:12:57.610 22:12:54 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:12:57.610 22:12:54 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:12:57.610 22:12:54 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:57.610 22:12:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:57.610 22:12:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.610 22:12:54 -- common/autotest_common.sh@10 -- # set +x 00:12:57.610 ************************************ 00:12:57.610 START TEST nvmf_vfio_user 00:12:57.610 ************************************ 00:12:57.610 22:12:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:57.870 * Looking for test storage... 00:12:57.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:57.870 22:12:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:57.870 22:12:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:57.870 22:12:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:57.870 22:12:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:57.870 22:12:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:57.870 22:12:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:57.870 22:12:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:57.870 22:12:54 -- scripts/common.sh@335 -- # IFS=.-: 00:12:57.870 22:12:54 -- scripts/common.sh@335 -- # read -ra ver1 00:12:57.870 22:12:54 -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.870 22:12:54 -- scripts/common.sh@336 -- # read -ra ver2 00:12:57.870 22:12:54 -- scripts/common.sh@337 -- # local 'op=<' 00:12:57.870 22:12:54 -- scripts/common.sh@339 -- # ver1_l=2 00:12:57.870 22:12:54 -- scripts/common.sh@340 -- # ver2_l=1 00:12:57.870 22:12:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:57.870 22:12:54 -- scripts/common.sh@343 -- # case "$op" in 00:12:57.870 22:12:54 -- scripts/common.sh@344 -- # : 1 00:12:57.870 22:12:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:57.870 22:12:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:57.870 22:12:54 -- scripts/common.sh@364 -- # decimal 1 00:12:57.870 22:12:54 -- scripts/common.sh@352 -- # local d=1 00:12:57.870 22:12:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.870 22:12:54 -- scripts/common.sh@354 -- # echo 1 00:12:57.870 22:12:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:57.870 22:12:54 -- scripts/common.sh@365 -- # decimal 2 00:12:57.870 22:12:54 -- scripts/common.sh@352 -- # local d=2 00:12:57.870 22:12:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.870 22:12:54 -- scripts/common.sh@354 -- # echo 2 00:12:57.870 22:12:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:57.870 22:12:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:57.870 22:12:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:57.870 22:12:54 -- scripts/common.sh@367 -- # return 0 00:12:57.870 22:12:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.870 22:12:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:57.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.870 --rc genhtml_branch_coverage=1 00:12:57.870 --rc genhtml_function_coverage=1 00:12:57.870 --rc genhtml_legend=1 00:12:57.870 --rc geninfo_all_blocks=1 00:12:57.870 --rc geninfo_unexecuted_blocks=1 00:12:57.870 00:12:57.870 ' 00:12:57.870 22:12:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:57.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.870 --rc genhtml_branch_coverage=1 00:12:57.870 --rc genhtml_function_coverage=1 00:12:57.870 --rc genhtml_legend=1 00:12:57.870 --rc geninfo_all_blocks=1 00:12:57.870 --rc geninfo_unexecuted_blocks=1 00:12:57.870 00:12:57.870 ' 00:12:57.870 22:12:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:57.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.870 --rc genhtml_branch_coverage=1 00:12:57.870 --rc genhtml_function_coverage=1 00:12:57.870 --rc genhtml_legend=1 00:12:57.870 --rc geninfo_all_blocks=1 00:12:57.870 --rc geninfo_unexecuted_blocks=1 00:12:57.870 00:12:57.870 ' 00:12:57.870 22:12:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:57.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.870 --rc genhtml_branch_coverage=1 00:12:57.870 --rc genhtml_function_coverage=1 00:12:57.870 --rc genhtml_legend=1 00:12:57.870 --rc geninfo_all_blocks=1 00:12:57.870 --rc geninfo_unexecuted_blocks=1 00:12:57.870 00:12:57.870 ' 00:12:57.870 22:12:54 -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:57.870 22:12:54 -- nvmf/common.sh@7 -- # uname -s 00:12:57.870 22:12:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.870 22:12:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.870 22:12:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.870 22:12:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.870 22:12:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.870 22:12:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.870 22:12:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.870 22:12:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.870 22:12:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.870 22:12:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.870 22:12:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 
00:12:57.870 22:12:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:12:57.870 22:12:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.870 22:12:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.870 22:12:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:57.870 22:12:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:57.870 22:12:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.870 22:12:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.870 22:12:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.870 22:12:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.870 22:12:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.870 22:12:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.870 22:12:54 -- paths/export.sh@5 -- # export PATH 00:12:57.870 22:12:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.870 22:12:54 -- nvmf/common.sh@46 -- # : 0 00:12:57.870 22:12:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:57.870 22:12:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:57.870 22:12:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:57.870 22:12:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.870 22:12:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.870 22:12:54 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:57.871 22:12:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:57.871 22:12:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=70770 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 70770' 00:12:57.871 Process pid: 70770 00:12:57.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:57.871 22:12:54 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 70770 00:12:57.871 22:12:54 -- common/autotest_common.sh@829 -- # '[' -z 70770 ']' 00:12:57.871 22:12:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.871 22:12:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.871 22:12:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.871 22:12:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.871 22:12:54 -- common/autotest_common.sh@10 -- # set +x 00:12:57.871 [2024-11-17 22:12:54.435947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:57.871 [2024-11-17 22:12:54.436268] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.129 [2024-11-17 22:12:54.571758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.130 [2024-11-17 22:12:54.673083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:58.130 [2024-11-17 22:12:54.673572] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.130 [2024-11-17 22:12:54.673724] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.130 [2024-11-17 22:12:54.673885] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
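(Aside, not part of the captured output: the two trace hints printed by the target above can be followed up roughly as below. This is a sketch only — the spdk_trace binary is assumed to sit under the same build/bin tree as the other tools in this run; the flags themselves are the ones the target suggests.)

    # Live snapshot of the nvmf tracepoints for the app started with -i 0, as hinted above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
    # Or keep the raw shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/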
00:12:58.130 [2024-11-17 22:12:54.674043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.130 [2024-11-17 22:12:54.674112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.130 [2024-11-17 22:12:54.674221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.130 [2024-11-17 22:12:54.674254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.067 22:12:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.067 22:12:55 -- common/autotest_common.sh@862 -- # return 0 00:12:59.067 22:12:55 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:00.004 22:12:56 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:00.262 22:12:56 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:00.263 22:12:56 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:00.263 22:12:56 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:00.263 22:12:56 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:00.263 22:12:56 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:00.521 Malloc1 00:13:00.521 22:12:57 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:00.780 22:12:57 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:01.040 22:12:57 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:01.299 22:12:57 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:01.299 22:12:57 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:01.299 22:12:57 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:01.559 Malloc2 00:13:01.559 22:12:58 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:01.817 22:12:58 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:02.076 22:12:58 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:02.337 22:12:58 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:02.337 22:12:58 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:02.337 22:12:58 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:02.337 22:12:58 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:02.337 22:12:58 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:02.337 22:12:58 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:02.337 [2024-11-17 22:12:58.725301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:02.337 [2024-11-17 22:12:58.725372] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70907 ] 00:13:02.337 [2024-11-17 22:12:58.864763] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:02.337 [2024-11-17 22:12:58.874234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:02.337 [2024-11-17 22:12:58.874280] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f466fced000 00:13:02.337 [2024-11-17 22:12:58.875224] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.337 [2024-11-17 22:12:58.876204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.337 [2024-11-17 22:12:58.877215] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.337 [2024-11-17 22:12:58.878220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:02.337 [2024-11-17 22:12:58.879233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:02.337 [2024-11-17 22:12:58.880240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.337 [2024-11-17 22:12:58.881259] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:02.337 [2024-11-17 22:12:58.882277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.337 [2024-11-17 22:12:58.883310] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:02.337 [2024-11-17 22:12:58.883343] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f466f408000 00:13:02.337 [2024-11-17 22:12:58.884364] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:02.337 [2024-11-17 22:12:58.898644] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:02.337 [2024-11-17 22:12:58.898686] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:02.337 [2024-11-17 22:12:58.903450] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:02.337 [2024-11-17 22:12:58.903522] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:02.337 [2024-11-17 22:12:58.903600] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:02.337 [2024-11-17 
22:12:58.903628] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:02.337 [2024-11-17 22:12:58.903635] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:02.337 [2024-11-17 22:12:58.904444] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:02.337 [2024-11-17 22:12:58.904479] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:02.337 [2024-11-17 22:12:58.904489] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:02.337 [2024-11-17 22:12:58.905459] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:02.337 [2024-11-17 22:12:58.905492] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:02.337 [2024-11-17 22:12:58.905503] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:02.337 [2024-11-17 22:12:58.906465] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:02.337 [2024-11-17 22:12:58.906499] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:02.337 [2024-11-17 22:12:58.907466] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:02.337 [2024-11-17 22:12:58.907499] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:02.337 [2024-11-17 22:12:58.907506] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:02.337 [2024-11-17 22:12:58.907515] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:02.337 [2024-11-17 22:12:58.907622] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:02.337 [2024-11-17 22:12:58.907627] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:02.337 [2024-11-17 22:12:58.907633] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:02.337 [2024-11-17 22:12:58.908495] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:02.337 [2024-11-17 22:12:58.909493] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:02.337 [2024-11-17 22:12:58.910508] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: 
offset 0x14, value 0x460001 00:13:02.337 [2024-11-17 22:12:58.911539] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:02.337 [2024-11-17 22:12:58.912510] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:02.337 [2024-11-17 22:12:58.912530] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:02.337 [2024-11-17 22:12:58.912536] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:02.337 [2024-11-17 22:12:58.912556] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:02.337 [2024-11-17 22:12:58.912573] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:02.337 [2024-11-17 22:12:58.912591] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:02.337 [2024-11-17 22:12:58.912597] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.337 [2024-11-17 22:12:58.912615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.337 [2024-11-17 22:12:58.912702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:02.337 [2024-11-17 22:12:58.912714] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:02.337 [2024-11-17 22:12:58.912720] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:02.337 [2024-11-17 22:12:58.912724] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:02.337 [2024-11-17 22:12:58.912730] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:02.337 [2024-11-17 22:12:58.912745] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:02.337 [2024-11-17 22:12:58.912763] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:02.337 [2024-11-17 22:12:58.912768] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.912794] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.912806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.912852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:02.338 [2024-11-17 22:12:58.912877] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.338 [2024-11-17 22:12:58.912897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.338 [2024-11-17 22:12:58.912905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.338 [2024-11-17 22:12:58.912916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.338 [2024-11-17 22:12:58.912922] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.912934] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.912944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.912954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:02.338 [2024-11-17 22:12:58.912962] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:02.338 [2024-11-17 22:12:58.912967] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.912975] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.912985] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.912994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.913007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:02.338 [2024-11-17 22:12:58.913059] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.913068] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.913079] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:02.338 [2024-11-17 22:12:58.913084] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:02.338 [2024-11-17 22:12:58.913095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.913114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:02.338 [2024-11-17 
22:12:58.913131] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:02.338 [2024-11-17 22:12:58.913141] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.913151] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.913169] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:02.338 [2024-11-17 22:12:58.913173] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.338 [2024-11-17 22:12:58.913180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.913203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:02.338 [2024-11-17 22:12:58.913219] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.913228] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.913238] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:02.338 [2024-11-17 22:12:58.913242] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.338 [2024-11-17 22:12:58.913249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.913263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:02.338 [2024-11-17 22:12:58.913272] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.913280] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.913290] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.913297] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.913303] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.913308] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:02.338 [2024-11-17 22:12:58.913313] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:02.338 [2024-11-17 22:12:58.913318] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:02.338 [2024-11-17 22:12:58.913337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.913350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:02.338 [2024-11-17 22:12:58.913364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.913377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:02.338 [2024-11-17 22:12:58.913389] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.913401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:02.338 [2024-11-17 22:12:58.913413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.913423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:02.338 [2024-11-17 22:12:58.913449] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:02.338 [2024-11-17 22:12:58.913454] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:02.338 [2024-11-17 22:12:58.913458] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:02.338 [2024-11-17 22:12:58.913461] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:02.338 [2024-11-17 22:12:58.913468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:02.338 [2024-11-17 22:12:58.913475] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:02.338 [2024-11-17 22:12:58.913479] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:02.338 [2024-11-17 22:12:58.913486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.913493] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:02.338 [2024-11-17 22:12:58.913497] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.338 [2024-11-17 22:12:58.913503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.913510] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:02.338 [2024-11-17 22:12:58.913514] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:02.338 [2024-11-17 22:12:58.913521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:02.338 [2024-11-17 22:12:58.913528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:02.338 ===================================================== 00:13:02.338 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:02.338 ===================================================== 00:13:02.338 Controller Capabilities/Features 00:13:02.338 ================================ 00:13:02.338 Vendor ID: 4e58 00:13:02.338 Subsystem Vendor ID: 4e58 00:13:02.338 Serial Number: SPDK1 00:13:02.338 Model Number: SPDK bdev Controller 00:13:02.338 Firmware Version: 24.01.1 00:13:02.338 Recommended Arb Burst: 6 00:13:02.338 IEEE OUI Identifier: 8d 6b 50 00:13:02.338 Multi-path I/O 00:13:02.338 May have multiple subsystem ports: Yes 00:13:02.338 May have multiple controllers: Yes 00:13:02.338 Associated with SR-IOV VF: No 00:13:02.338 Max Data Transfer Size: 131072 00:13:02.338 Max Number of Namespaces: 32 00:13:02.338 Max Number of I/O Queues: 127 00:13:02.338 NVMe Specification Version (VS): 1.3 00:13:02.338 NVMe Specification Version (Identify): 1.3 00:13:02.338 Maximum Queue Entries: 256 00:13:02.338 Contiguous Queues Required: Yes 00:13:02.338 Arbitration Mechanisms Supported 00:13:02.338 Weighted Round Robin: Not Supported 00:13:02.338 Vendor Specific: Not Supported 00:13:02.338 Reset Timeout: 15000 ms 00:13:02.338 Doorbell Stride: 4 bytes 00:13:02.338 NVM Subsystem Reset: Not Supported 00:13:02.338 Command Sets Supported 00:13:02.338 NVM Command Set: Supported 00:13:02.338 Boot Partition: Not Supported 00:13:02.338 Memory Page Size Minimum: 4096 bytes 00:13:02.339 Memory Page Size Maximum: 4096 bytes 00:13:02.339 Persistent Memory Region: Not Supported 00:13:02.339 Optional Asynchronous Events Supported 00:13:02.339 Namespace Attribute Notices: Supported 00:13:02.339 Firmware Activation Notices: Not Supported 00:13:02.339 ANA Change Notices: Not Supported 00:13:02.339 PLE Aggregate Log Change Notices: Not Supported 00:13:02.339 LBA Status Info Alert Notices: Not Supported 00:13:02.339 EGE Aggregate Log Change Notices: Not Supported 00:13:02.339 Normal NVM Subsystem Shutdown event: Not Supported 00:13:02.339 Zone Descriptor Change Notices: Not Supported 00:13:02.339 Discovery Log Change Notices: Not Supported 00:13:02.339 Controller Attributes 00:13:02.339 128-bit Host Identifier: Supported 00:13:02.339 Non-Operational Permissive Mode: Not Supported 00:13:02.339 NVM Sets: Not Supported 00:13:02.339 Read Recovery Levels: Not Supported 00:13:02.339 Endurance Groups: Not Supported 00:13:02.339 Predictable Latency Mode: Not Supported 00:13:02.339 Traffic Based Keep ALive: Not Supported 00:13:02.339 Namespace Granularity: Not Supported 00:13:02.339 SQ Associations: Not Supported 00:13:02.339 UUID List: Not Supported 00:13:02.339 Multi-Domain Subsystem: Not Supported 00:13:02.339 Fixed Capacity Management: Not Supported 00:13:02.339 Variable Capacity Management: Not Supported 00:13:02.339 Delete Endurance Group: Not Supported 00:13:02.339 Delete NVM Set: Not Supported 00:13:02.339 Extended LBA Formats Supported: Not Supported 00:13:02.339 Flexible Data Placement Supported: Not Supported 00:13:02.339 00:13:02.339 Controller Memory Buffer Support 00:13:02.339 ================================ 00:13:02.339 Supported: No 00:13:02.339 00:13:02.339 Persistent Memory Region Support 00:13:02.339 
================================ 00:13:02.339 Supported: No 00:13:02.339 00:13:02.339 Admin Command Set Attributes 00:13:02.339 ============================ 00:13:02.339 Security Send/Receive: Not Supported 00:13:02.339 Format NVM: Not Supported 00:13:02.339 Firmware Activate/Download: Not Supported 00:13:02.339 Namespace Management: Not Supported 00:13:02.339 Device Self-Test: Not Supported 00:13:02.339 Directives: Not Supported 00:13:02.339 NVMe-MI: Not Supported 00:13:02.339 Virtualization Management: Not Supported 00:13:02.339 Doorbell Buffer Config: Not Supported 00:13:02.339 Get LBA Status Capability: Not Supported 00:13:02.339 Command & Feature Lockdown Capability: Not Supported 00:13:02.339 Abort Command Limit: 4 00:13:02.339 Async Event Request Limit: 4 00:13:02.339 Number of Firmware Slots: N/A 00:13:02.339 Firmware Slot 1 Read-Only: N/A 00:13:02.339 Firmware Activation Without Reset: N/A 00:13:02.339 Multiple Update Detection Support: N/A 00:13:02.339 Firmware Update Granularity: No Information Provided 00:13:02.339 Per-Namespace SMART Log: No 00:13:02.339 Asymmetric Namespace Access Log Page: Not Supported 00:13:02.339 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:02.339 Command Effects Log Page: Supported 00:13:02.339 Get Log Page Extended Data: Supported 00:13:02.339 Telemetry Log Pages: Not Supported 00:13:02.339 Persistent Event Log Pages: Not Supported 00:13:02.339 Supported Log Pages Log Page: May Support 00:13:02.339 Commands Supported & Effects Log Page: Not Supported 00:13:02.339 Feature Identifiers & Effects Log Page:May Support 00:13:02.339 NVMe-MI Commands & Effects Log Page: May Support 00:13:02.339 Data Area 4 for Telemetry Log: Not Supported 00:13:02.339 Error Log Page Entries Supported: 128 00:13:02.339 Keep Alive: Supported 00:13:02.339 Keep Alive Granularity: 10000 ms 00:13:02.339 00:13:02.339 NVM Command Set Attributes 00:13:02.339 ========================== 00:13:02.339 Submission Queue Entry Size 00:13:02.339 Max: 64 00:13:02.339 Min: 64 00:13:02.339 Completion Queue Entry Size 00:13:02.339 Max: 16 00:13:02.339 Min: 16 00:13:02.339 Number of Namespaces: 32 00:13:02.339 Compare Command: Supported 00:13:02.339 Write Uncorrectable Command: Not Supported 00:13:02.339 Dataset Management Command: Supported 00:13:02.339 Write Zeroes Command: Supported 00:13:02.339 Set Features Save Field: Not Supported 00:13:02.339 Reservations: Not Supported 00:13:02.339 Timestamp: Not Supported 00:13:02.339 Copy: Supported 00:13:02.339 Volatile Write Cache: Present 00:13:02.339 Atomic Write Unit (Normal): 1 00:13:02.339 Atomic Write Unit (PFail): 1 00:13:02.339 Atomic Compare & Write Unit: 1 00:13:02.339 Fused Compare & Write: Supported 00:13:02.339 Scatter-Gather List 00:13:02.339 SGL Command Set: Supported (Dword aligned) 00:13:02.339 SGL Keyed: Not Supported 00:13:02.339 SGL Bit Bucket Descriptor: Not Supported 00:13:02.339 SGL Metadata Pointer: Not Supported 00:13:02.339 Oversized SGL: Not Supported 00:13:02.339 SGL Metadata Address: Not Supported 00:13:02.339 SGL Offset: Not Supported 00:13:02.339 Transport SGL Data Block: Not Supported 00:13:02.339 Replay Protected Memory Block: Not Supported 00:13:02.339 00:13:02.339 Firmware Slot Information 00:13:02.339 ========================= 00:13:02.339 Active slot: 1 00:13:02.339 Slot 1 Firmware Revision: 24.01.1 00:13:02.339 00:13:02.339 00:13:02.339 Commands Supported and Effects 00:13:02.339 ============================== 00:13:02.339 Admin Commands 00:13:02.339 -------------- 00:13:02.339 Get Log Page (02h): Supported 
00:13:02.339 Identify (06h): Supported 00:13:02.339 Abort (08h): Supported 00:13:02.339 Set Features (09h): Supported 00:13:02.339 Get Features (0Ah): Supported 00:13:02.339 Asynchronous Event Request (0Ch): Supported 00:13:02.339 Keep Alive (18h): Supported 00:13:02.339 I/O Commands 00:13:02.339 ------------ 00:13:02.339 Flush (00h): Supported LBA-Change 00:13:02.339 Write (01h): Supported LBA-Change 00:13:02.339 Read (02h): Supported 00:13:02.339 Compare (05h): Supported 00:13:02.339 Write Zeroes (08h): Supported LBA-Change 00:13:02.339 Dataset Management (09h): Supported LBA-Change 00:13:02.339 Copy (19h): Supported LBA-Change 00:13:02.339 Unknown (79h): Supported LBA-Change 00:13:02.339 Unknown (7Ah): Supported 00:13:02.339 00:13:02.339 Error Log 00:13:02.339 ========= 00:13:02.339 00:13:02.339 Arbitration 00:13:02.339 =========== 00:13:02.339 Arbitration Burst: 1 00:13:02.339 00:13:02.339 Power Management 00:13:02.339 ================ 00:13:02.339 Number of Power States: 1 00:13:02.339 Current Power State: Power State #0 00:13:02.339 Power State #0: 00:13:02.339 Max Power: 0.00 W 00:13:02.339 Non-Operational State: Operational 00:13:02.339 Entry Latency: Not Reported 00:13:02.339 Exit Latency: Not Reported 00:13:02.339 Relative Read Throughput: 0 00:13:02.339 Relative Read Latency: 0 00:13:02.339 Relative Write Throughput: 0 00:13:02.339 Relative Write Latency: 0 00:13:02.339 Idle Power: Not Reported 00:13:02.339 Active Power: Not Reported 00:13:02.339 Non-Operational Permissive Mode: Not Supported 00:13:02.339 00:13:02.339 Health Information 00:13:02.339 ================== 00:13:02.339 Critical Warnings: 00:13:02.339 Available Spare Space: OK 00:13:02.339 Temperature: OK 00:13:02.339 Device Reliability: OK 00:13:02.339 Read Only: No 00:13:02.339 Volatile Memory Backup: OK 00:13:02.339 Current Temperature: 0 Kelvin[2024-11-17 22:12:58.913545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:02.339 [2024-11-17 22:12:58.913556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:02.339 [2024-11-17 22:12:58.913563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:02.339 [2024-11-17 22:12:58.913681] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:02.339 [2024-11-17 22:12:58.913692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:02.340 [2024-11-17 22:12:58.913728] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:02.340 [2024-11-17 22:12:58.913752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.340 [2024-11-17 22:12:58.913760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.340 [2024-11-17 22:12:58.913767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.340 [2024-11-17 22:12:58.913773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.340 [2024-11-17 22:12:58.917760] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:02.340 [2024-11-17 22:12:58.917798] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:02.340 [2024-11-17 22:12:58.918617] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:02.340 [2024-11-17 22:12:58.918634] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:02.340 [2024-11-17 22:12:58.919556] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:02.340 [2024-11-17 22:12:58.919594] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:02.340 [2024-11-17 22:12:58.919654] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:02.340 [2024-11-17 22:12:58.921606] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:02.599 (-273 Celsius) 00:13:02.599 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:02.599 Available Spare: 0% 00:13:02.599 Available Spare Threshold: 0% 00:13:02.599 Life Percentage Used: 0% 00:13:02.599 Data Units Read: 0 00:13:02.599 Data Units Written: 0 00:13:02.599 Host Read Commands: 0 00:13:02.599 Host Write Commands: 0 00:13:02.599 Controller Busy Time: 0 minutes 00:13:02.599 Power Cycles: 0 00:13:02.599 Power On Hours: 0 hours 00:13:02.599 Unsafe Shutdowns: 0 00:13:02.599 Unrecoverable Media Errors: 0 00:13:02.599 Lifetime Error Log Entries: 0 00:13:02.599 Warning Temperature Time: 0 minutes 00:13:02.599 Critical Temperature Time: 0 minutes 00:13:02.599 00:13:02.599 Number of Queues 00:13:02.599 ================ 00:13:02.599 Number of I/O Submission Queues: 127 00:13:02.599 Number of I/O Completion Queues: 127 00:13:02.599 00:13:02.599 Active Namespaces 00:13:02.599 ================= 00:13:02.599 Namespace ID:1 00:13:02.599 Error Recovery Timeout: Unlimited 00:13:02.599 Command Set Identifier: NVM (00h) 00:13:02.599 Deallocate: Supported 00:13:02.599 Deallocated/Unwritten Error: Not Supported 00:13:02.599 Deallocated Read Value: Unknown 00:13:02.599 Deallocate in Write Zeroes: Not Supported 00:13:02.599 Deallocated Guard Field: 0xFFFF 00:13:02.599 Flush: Supported 00:13:02.599 Reservation: Supported 00:13:02.599 Namespace Sharing Capabilities: Multiple Controllers 00:13:02.599 Size (in LBAs): 131072 (0GiB) 00:13:02.599 Capacity (in LBAs): 131072 (0GiB) 00:13:02.599 Utilization (in LBAs): 131072 (0GiB) 00:13:02.599 NGUID: 235BFBE0E25D47148A35DA37B1DD850D 00:13:02.599 UUID: 235bfbe0-e25d-4714-8a35-da37b1dd850d 00:13:02.599 Thin Provisioning: Not Supported 00:13:02.599 Per-NS Atomic Units: Yes 00:13:02.599 Atomic Boundary Size (Normal): 0 00:13:02.599 Atomic Boundary Size (PFail): 0 00:13:02.599 Atomic Boundary Offset: 0 00:13:02.599 Maximum Single Source Range Length: 65535 00:13:02.599 Maximum Copy Length: 65535 00:13:02.599 Maximum Source Range Count: 1 00:13:02.599 NGUID/EUI64 Never Reused: No 00:13:02.599 Namespace Write Protected: No 00:13:02.599 Number of LBA Formats: 1 00:13:02.599 Current LBA Format: LBA Format #00 00:13:02.599 LBA Format #00: Data Size: 512 Metadata Size: 0 
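(Aside, not part of the captured output: before the performance runs below, it may help to see the vfio-user bring-up that the harness performed above collected in one place. This is a recap sketch, not a separate invocation — paths are shortened to the repo root used in this run, and rpc.py is assumed to talk to its default /var/tmp/spdk.sock socket.)

    # Target side: start nvmf_tgt on cores 0-3 and enable the VFIOUSER transport
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    # One malloc-backed namespace per subsystem, each listening on a vfio-user socket directory
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
    # Host side: tools address the controller by socket directory instead of IP:port
    build/bin/spdk_nvme_identify -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    build/bin/spdk_nvme_perf -g -s 256 -q 128 -o 4096 -w read -t 5 -c 0x2 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

The second subsystem (Malloc2, nqn.2019-07.io.spdk:cnode2, /var/run/vfio-user/domain/vfio-user2/2) is created with the same sequence.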
00:13:02.599 00:13:02.599 22:12:58 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:07.878 Initializing NVMe Controllers 00:13:07.878 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:07.878 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:07.878 Initialization complete. Launching workers. 00:13:07.878 ======================================================== 00:13:07.878 Latency(us) 00:13:07.878 Device Information : IOPS MiB/s Average min max 00:13:07.879 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 37955.99 148.27 3371.88 985.97 10511.16 00:13:07.879 ======================================================== 00:13:07.879 Total : 37955.99 148.27 3371.88 985.97 10511.16 00:13:07.879 00:13:07.879 22:13:04 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:13.150 Initializing NVMe Controllers 00:13:13.150 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:13.150 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:13.150 Initialization complete. Launching workers. 00:13:13.150 ======================================================== 00:13:13.150 Latency(us) 00:13:13.150 Device Information : IOPS MiB/s Average min max 00:13:13.150 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16018.95 62.57 7990.00 6029.98 15063.04 00:13:13.150 ======================================================== 00:13:13.150 Total : 16018.95 62.57 7990.00 6029.98 15063.04 00:13:13.150 00:13:13.150 22:13:09 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:18.419 Initializing NVMe Controllers 00:13:18.419 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:18.419 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:18.419 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:18.419 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:18.419 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:18.419 Initialization complete. Launching workers. 
00:13:18.419 Starting thread on core 2 00:13:18.419 Starting thread on core 3 00:13:18.419 Starting thread on core 1 00:13:18.419 22:13:14 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:22.611 Initializing NVMe Controllers 00:13:22.611 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.611 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.611 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:22.611 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:22.611 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:22.611 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:22.611 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:22.611 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:22.611 Initialization complete. Launching workers. 00:13:22.611 Starting thread on core 1 with urgent priority queue 00:13:22.611 Starting thread on core 2 with urgent priority queue 00:13:22.611 Starting thread on core 3 with urgent priority queue 00:13:22.611 Starting thread on core 0 with urgent priority queue 00:13:22.611 SPDK bdev Controller (SPDK1 ) core 0: 3983.33 IO/s 25.10 secs/100000 ios 00:13:22.611 SPDK bdev Controller (SPDK1 ) core 1: 3911.33 IO/s 25.57 secs/100000 ios 00:13:22.611 SPDK bdev Controller (SPDK1 ) core 2: 4965.33 IO/s 20.14 secs/100000 ios 00:13:22.611 SPDK bdev Controller (SPDK1 ) core 3: 4549.33 IO/s 21.98 secs/100000 ios 00:13:22.611 ======================================================== 00:13:22.611 00:13:22.611 22:13:18 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:22.611 Initializing NVMe Controllers 00:13:22.611 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.611 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.611 Namespace ID: 1 size: 0GB 00:13:22.611 Initialization complete. 00:13:22.611 INFO: using host memory buffer for IO 00:13:22.611 Hello world! 00:13:22.611 22:13:18 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:23.549 Initializing NVMe Controllers 00:13:23.549 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:23.549 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:23.549 Initialization complete. Launching workers. 
00:13:23.549 submit (in ns) avg, min, max = 7559.3, 3230.9, 5014323.6 00:13:23.549 complete (in ns) avg, min, max = 26032.9, 1965.5, 6137611.8 00:13:23.549 00:13:23.549 Submit histogram 00:13:23.549 ================ 00:13:23.549 Range in us Cumulative Count 00:13:23.549 3.229 - 3.244: 0.0234% ( 3) 00:13:23.549 3.244 - 3.258: 0.0937% ( 9) 00:13:23.549 3.258 - 3.273: 0.2186% ( 16) 00:13:23.549 3.273 - 3.287: 0.3357% ( 15) 00:13:23.549 3.287 - 3.302: 0.4606% ( 16) 00:13:23.549 3.302 - 3.316: 0.5543% ( 12) 00:13:23.549 3.316 - 3.331: 0.8120% ( 33) 00:13:23.549 3.331 - 3.345: 1.0540% ( 31) 00:13:23.549 3.345 - 3.360: 2.7014% ( 211) 00:13:23.549 3.360 - 3.375: 8.2136% ( 706) 00:13:23.549 3.375 - 3.389: 15.3888% ( 919) 00:13:23.549 3.389 - 3.404: 22.0956% ( 859) 00:13:23.549 3.404 - 3.418: 26.9831% ( 626) 00:13:23.549 3.418 - 3.433: 32.1362% ( 660) 00:13:23.549 3.433 - 3.447: 38.1090% ( 765) 00:13:23.549 3.447 - 3.462: 43.8710% ( 738) 00:13:23.549 3.462 - 3.476: 48.2511% ( 561) 00:13:23.549 3.476 - 3.491: 52.9747% ( 605) 00:13:23.549 3.491 - 3.505: 56.4335% ( 443) 00:13:23.549 3.505 - 3.520: 59.4785% ( 390) 00:13:23.549 3.520 - 3.535: 62.8982% ( 438) 00:13:23.549 3.535 - 3.549: 65.7636% ( 367) 00:13:23.549 3.549 - 3.564: 68.2152% ( 314) 00:13:23.549 3.564 - 3.578: 69.8470% ( 209) 00:13:23.549 3.578 - 3.593: 71.7598% ( 245) 00:13:23.549 3.593 - 3.607: 73.0247% ( 162) 00:13:23.549 3.607 - 3.622: 74.1412% ( 143) 00:13:23.549 3.622 - 3.636: 75.2733% ( 145) 00:13:23.549 3.636 - 3.651: 76.4132% ( 146) 00:13:23.549 3.651 - 3.665: 77.1861% ( 99) 00:13:23.549 3.665 - 3.680: 78.0294% ( 108) 00:13:23.549 3.680 - 3.695: 78.6227% ( 76) 00:13:23.549 3.695 - 3.709: 79.4425% ( 105) 00:13:23.549 3.709 - 3.724: 80.5590% ( 143) 00:13:23.549 3.724 - 3.753: 82.5812% ( 259) 00:13:23.549 3.753 - 3.782: 84.2911% ( 219) 00:13:23.549 3.782 - 3.811: 85.8916% ( 205) 00:13:23.549 3.811 - 3.840: 87.2423% ( 173) 00:13:23.549 3.840 - 3.869: 88.2808% ( 133) 00:13:23.549 3.869 - 3.898: 89.2021% ( 118) 00:13:23.549 3.898 - 3.927: 90.8807% ( 215) 00:13:23.549 3.927 - 3.956: 92.8716% ( 255) 00:13:23.549 3.956 - 3.985: 94.4488% ( 202) 00:13:23.549 3.985 - 4.015: 95.2686% ( 105) 00:13:23.549 4.015 - 4.044: 95.5575% ( 37) 00:13:23.549 4.044 - 4.073: 95.8073% ( 32) 00:13:23.549 4.073 - 4.102: 96.0025% ( 25) 00:13:23.549 4.102 - 4.131: 96.1352% ( 17) 00:13:23.549 4.131 - 4.160: 96.2445% ( 14) 00:13:23.549 4.160 - 4.189: 96.3070% ( 8) 00:13:23.549 4.189 - 4.218: 96.4163% ( 14) 00:13:23.549 4.218 - 4.247: 96.5646% ( 19) 00:13:23.549 4.247 - 4.276: 96.7520% ( 24) 00:13:23.549 4.276 - 4.305: 97.0331% ( 36) 00:13:23.549 4.305 - 4.335: 97.2595% ( 29) 00:13:23.549 4.335 - 4.364: 97.3766% ( 15) 00:13:23.549 4.364 - 4.393: 97.4625% ( 11) 00:13:23.549 4.393 - 4.422: 97.5406% ( 10) 00:13:23.549 4.422 - 4.451: 97.5796% ( 5) 00:13:23.549 4.451 - 4.480: 97.6187% ( 5) 00:13:23.549 4.480 - 4.509: 97.6499% ( 4) 00:13:23.549 4.509 - 4.538: 97.6811% ( 4) 00:13:23.549 4.538 - 4.567: 97.7124% ( 4) 00:13:23.549 4.567 - 4.596: 97.7358% ( 3) 00:13:23.549 4.596 - 4.625: 97.7592% ( 3) 00:13:23.549 4.625 - 4.655: 97.7670% ( 1) 00:13:23.549 4.655 - 4.684: 97.7983% ( 4) 00:13:23.549 4.684 - 4.713: 97.8061% ( 1) 00:13:23.549 4.742 - 4.771: 97.8139% ( 1) 00:13:23.549 4.771 - 4.800: 97.8451% ( 4) 00:13:23.549 4.829 - 4.858: 97.8685% ( 3) 00:13:23.549 4.858 - 4.887: 97.8763% ( 1) 00:13:23.549 4.887 - 4.916: 97.8998% ( 3) 00:13:23.549 4.916 - 4.945: 97.9154% ( 2) 00:13:23.549 5.033 - 5.062: 97.9310% ( 2) 00:13:23.549 5.149 - 5.178: 97.9466% ( 2) 00:13:23.549 5.178 - 
5.207: 97.9544% ( 1) 00:13:23.549 5.295 - 5.324: 97.9622% ( 1) 00:13:23.549 5.382 - 5.411: 97.9700% ( 1) 00:13:23.549 5.469 - 5.498: 97.9778% ( 1) 00:13:23.549 6.633 - 6.662: 97.9856% ( 1) 00:13:23.549 7.447 - 7.505: 97.9934% ( 1) 00:13:23.549 7.505 - 7.564: 98.0012% ( 1) 00:13:23.549 7.622 - 7.680: 98.0169% ( 2) 00:13:23.549 7.738 - 7.796: 98.0247% ( 1) 00:13:23.549 7.796 - 7.855: 98.0325% ( 1) 00:13:23.549 8.087 - 8.145: 98.0403% ( 1) 00:13:23.549 8.495 - 8.553: 98.0481% ( 1) 00:13:23.549 8.669 - 8.727: 98.0559% ( 1) 00:13:23.549 8.727 - 8.785: 98.0715% ( 2) 00:13:23.549 9.018 - 9.076: 98.0793% ( 1) 00:13:23.549 9.076 - 9.135: 98.0871% ( 1) 00:13:23.549 9.135 - 9.193: 98.1027% ( 2) 00:13:23.549 9.425 - 9.484: 98.1106% ( 1) 00:13:23.549 9.484 - 9.542: 98.1184% ( 1) 00:13:23.549 9.542 - 9.600: 98.1262% ( 1) 00:13:23.549 9.716 - 9.775: 98.1418% ( 2) 00:13:23.549 9.775 - 9.833: 98.1652% ( 3) 00:13:23.549 9.891 - 9.949: 98.1730% ( 1) 00:13:23.549 10.065 - 10.124: 98.1808% ( 1) 00:13:23.549 10.182 - 10.240: 98.1886% ( 1) 00:13:23.549 10.473 - 10.531: 98.1964% ( 1) 00:13:23.549 10.531 - 10.589: 98.2042% ( 1) 00:13:23.549 11.229 - 11.287: 98.2121% ( 1) 00:13:23.549 12.102 - 12.160: 98.2199% ( 1) 00:13:23.549 12.335 - 12.393: 98.2277% ( 1) 00:13:23.549 12.451 - 12.509: 98.2355% ( 1) 00:13:23.549 12.567 - 12.625: 98.2433% ( 1) 00:13:23.549 12.684 - 12.742: 98.2511% ( 1) 00:13:23.549 12.858 - 12.916: 98.2589% ( 1) 00:13:23.549 13.556 - 13.615: 98.2667% ( 1) 00:13:23.549 13.615 - 13.673: 98.2823% ( 2) 00:13:23.549 13.673 - 13.731: 98.2901% ( 1) 00:13:23.549 13.731 - 13.789: 98.3057% ( 2) 00:13:23.549 13.905 - 13.964: 98.3136% ( 1) 00:13:23.549 13.964 - 14.022: 98.3292% ( 2) 00:13:23.549 14.022 - 14.080: 98.3370% ( 1) 00:13:23.549 14.080 - 14.138: 98.3448% ( 1) 00:13:23.549 14.138 - 14.196: 98.3604% ( 2) 00:13:23.549 14.196 - 14.255: 98.3682% ( 1) 00:13:23.549 14.313 - 14.371: 98.3838% ( 2) 00:13:23.549 14.429 - 14.487: 98.3994% ( 2) 00:13:23.549 14.545 - 14.604: 98.4072% ( 1) 00:13:23.549 14.604 - 14.662: 98.4229% ( 2) 00:13:23.549 14.662 - 14.720: 98.4307% ( 1) 00:13:23.549 14.720 - 14.778: 98.4385% ( 1) 00:13:23.549 14.778 - 14.836: 98.4463% ( 1) 00:13:23.549 14.836 - 14.895: 98.4541% ( 1) 00:13:23.549 14.895 - 15.011: 98.4931% ( 5) 00:13:23.549 15.011 - 15.127: 98.5244% ( 4) 00:13:23.549 15.127 - 15.244: 98.5400% ( 2) 00:13:23.549 15.244 - 15.360: 98.5634% ( 3) 00:13:23.549 15.593 - 15.709: 98.5790% ( 2) 00:13:23.549 15.709 - 15.825: 98.6024% ( 3) 00:13:23.549 15.942 - 16.058: 98.6181% ( 2) 00:13:23.549 16.058 - 16.175: 98.6259% ( 1) 00:13:23.549 16.175 - 16.291: 98.6415% ( 2) 00:13:23.549 16.291 - 16.407: 98.6571% ( 2) 00:13:23.549 16.407 - 16.524: 98.6649% ( 1) 00:13:23.549 16.524 - 16.640: 98.6805% ( 2) 00:13:23.549 16.640 - 16.756: 98.6883% ( 1) 00:13:23.549 17.105 - 17.222: 98.6961% ( 1) 00:13:23.549 17.687 - 17.804: 98.7039% ( 1) 00:13:23.549 17.804 - 17.920: 98.7117% ( 1) 00:13:23.549 17.920 - 18.036: 98.7196% ( 1) 00:13:23.549 18.036 - 18.153: 98.7352% ( 2) 00:13:23.549 18.153 - 18.269: 98.7976% ( 8) 00:13:23.549 18.269 - 18.385: 98.8523% ( 7) 00:13:23.549 18.385 - 18.502: 98.8913% ( 5) 00:13:23.549 18.502 - 18.618: 98.9225% ( 4) 00:13:23.549 18.618 - 18.735: 99.0006% ( 10) 00:13:23.550 18.735 - 18.851: 99.0475% ( 6) 00:13:23.550 18.851 - 18.967: 99.0787% ( 4) 00:13:23.550 18.967 - 19.084: 99.1099% ( 4) 00:13:23.550 19.084 - 19.200: 99.1568% ( 6) 00:13:23.550 19.200 - 19.316: 99.1802% ( 3) 00:13:23.550 19.316 - 19.433: 99.2661% ( 11) 00:13:23.550 19.433 - 19.549: 99.3754% ( 14) 
00:13:23.550 19.549 - 19.665: 99.4613% ( 11) 00:13:23.550 19.665 - 19.782: 99.5628% ( 13) 00:13:23.550 19.782 - 19.898: 99.6174% ( 7) 00:13:23.550 19.898 - 20.015: 99.6721% ( 7) 00:13:23.550 20.015 - 20.131: 99.6955% ( 3) 00:13:23.550 20.131 - 20.247: 99.7111% ( 2) 00:13:23.550 20.247 - 20.364: 99.7189% ( 1) 00:13:23.550 20.364 - 20.480: 99.7345% ( 2) 00:13:23.550 20.480 - 20.596: 99.7580% ( 3) 00:13:23.550 21.178 - 21.295: 99.7658% ( 1) 00:13:23.550 29.789 - 30.022: 99.7736% ( 1) 00:13:23.550 30.022 - 30.255: 99.7814% ( 1) 00:13:23.550 30.255 - 30.487: 99.7892% ( 1) 00:13:23.550 30.487 - 30.720: 99.7970% ( 1) 00:13:23.550 30.953 - 31.185: 99.8048% ( 1) 00:13:23.550 31.418 - 31.651: 99.8126% ( 1) 00:13:23.550 31.651 - 31.884: 99.8204% ( 1) 00:13:23.550 32.116 - 32.349: 99.8282% ( 1) 00:13:23.550 32.349 - 32.582: 99.8360% ( 1) 00:13:23.550 32.582 - 32.815: 99.8438% ( 1) 00:13:23.550 35.607 - 35.840: 99.8517% ( 1) 00:13:23.550 37.702 - 37.935: 99.8595% ( 1) 00:13:23.550 41.891 - 42.124: 99.8673% ( 1) 00:13:23.550 45.847 - 46.080: 99.8751% ( 1) 00:13:23.550 47.942 - 48.175: 99.8829% ( 1) 00:13:23.550 69.353 - 69.818: 99.8907% ( 1) 00:13:23.550 930.909 - 934.633: 99.8985% ( 1) 00:13:23.550 953.251 - 960.698: 99.9063% ( 1) 00:13:23.550 2874.647 - 2889.542: 99.9141% ( 1) 00:13:23.550 2964.015 - 2978.909: 99.9219% ( 1) 00:13:23.550 3008.698 - 3023.593: 99.9297% ( 1) 00:13:23.550 3053.382 - 3068.276: 99.9375% ( 1) 00:13:23.550 3842.793 - 3872.582: 99.9453% ( 1) 00:13:23.550 3961.949 - 3991.738: 99.9610% ( 2) 00:13:23.550 3991.738 - 4021.527: 99.9688% ( 1) 00:13:23.550 4021.527 - 4051.316: 99.9766% ( 1) 00:13:23.550 4140.684 - 4170.473: 99.9844% ( 1) 00:13:23.550 4974.778 - 5004.567: 99.9922% ( 1) 00:13:23.550 5004.567 - 5034.356: 100.0000% ( 1) 00:13:23.550 00:13:23.550 Complete histogram 00:13:23.550 ================== 00:13:23.550 Range in us Cumulative Count 00:13:23.550 1.964 - 1.978: 0.8432% ( 108) 00:13:23.550 1.978 - 1.993: 19.1599% ( 2346) 00:13:23.550 1.993 - 2.007: 34.8298% ( 2007) 00:13:23.550 2.007 - 2.022: 37.8592% ( 388) 00:13:23.550 2.022 - 2.036: 40.3810% ( 323) 00:13:23.550 2.036 - 2.051: 55.5746% ( 1946) 00:13:23.550 2.051 - 2.065: 61.3445% ( 739) 00:13:23.550 2.065 - 2.080: 63.3823% ( 261) 00:13:23.550 2.080 - 2.095: 66.3804% ( 384) 00:13:23.550 2.095 - 2.109: 72.5328% ( 788) 00:13:23.550 2.109 - 2.124: 75.7651% ( 414) 00:13:23.550 2.124 - 2.138: 77.1002% ( 171) 00:13:23.550 2.138 - 2.153: 78.1777% ( 138) 00:13:23.550 2.153 - 2.167: 80.9963% ( 361) 00:13:23.550 2.167 - 2.182: 82.3548% ( 174) 00:13:23.550 2.182 - 2.196: 83.3385% ( 126) 00:13:23.550 2.196 - 2.211: 83.8538% ( 66) 00:13:23.550 2.211 - 2.225: 84.8844% ( 132) 00:13:23.550 2.225 - 2.240: 85.8916% ( 129) 00:13:23.550 2.240 - 2.255: 86.4460% ( 71) 00:13:23.550 2.255 - 2.269: 86.7583% ( 40) 00:13:23.550 2.269 - 2.284: 87.1565% ( 51) 00:13:23.550 2.284 - 2.298: 87.7577% ( 77) 00:13:23.550 2.298 - 2.313: 88.2339% ( 61) 00:13:23.550 2.313 - 2.327: 88.6165% ( 49) 00:13:23.550 2.327 - 2.342: 88.9678% ( 45) 00:13:23.550 2.342 - 2.356: 89.4051% ( 56) 00:13:23.550 2.356 - 2.371: 91.8801% ( 317) 00:13:23.550 2.371 - 2.385: 94.1209% ( 287) 00:13:23.550 2.385 - 2.400: 94.8470% ( 93) 00:13:23.550 2.400 - 2.415: 95.0578% ( 27) 00:13:23.550 2.415 - 2.429: 95.2061% ( 19) 00:13:23.550 2.429 - 2.444: 95.3857% ( 23) 00:13:23.550 2.444 - 2.458: 95.5653% ( 23) 00:13:23.550 2.458 - 2.473: 95.6824% ( 15) 00:13:23.550 2.473 - 2.487: 95.7761% ( 12) 00:13:23.550 2.487 - 2.502: 95.8307% ( 7) 00:13:23.550 2.502 - 2.516: 95.8854% ( 7) 00:13:23.550 
2.516 - 2.531: 95.9478% ( 8) 00:13:23.550 2.531 - 2.545: 95.9947% ( 6) 00:13:23.550 2.545 - 2.560: 96.0181% ( 3) 00:13:23.550 2.560 - 2.575: 96.0415% ( 3) 00:13:23.550 2.575 - 2.589: 96.0884% ( 6) 00:13:23.550 2.589 - 2.604: 96.1196% ( 4) 00:13:23.550 2.604 - 2.618: 96.1430% ( 3) 00:13:23.550 2.618 - 2.633: 96.1587% ( 2) 00:13:23.550 2.633 - 2.647: 96.1899% ( 4) 00:13:23.550 2.647 - 2.662: 96.2133% ( 3) 00:13:23.550 2.662 - 2.676: 96.2445% ( 4) 00:13:23.550 2.676 - 2.691: 96.2601% ( 2) 00:13:23.550 2.705 - 2.720: 96.2680% ( 1) 00:13:23.550 2.735 - 2.749: 96.2758% ( 1) 00:13:23.550 2.749 - 2.764: 96.2836% ( 1) 00:13:23.550 2.793 - 2.807: 96.2914% ( 1) 00:13:23.550 2.924 - 2.938: 96.2992% ( 1) 00:13:23.550 2.938 - 2.953: 96.3070% ( 1) 00:13:23.550 3.171 - 3.185: 96.3148% ( 1) 00:13:23.550 3.200 - 3.215: 96.3226% ( 1) 00:13:23.550 3.302 - 3.316: 96.3304% ( 1) 00:13:23.550 3.331 - 3.345: 96.3382% ( 1) 00:13:23.550 3.404 - 3.418: 96.3460% ( 1) 00:13:23.550 3.447 - 3.462: 96.3538% ( 1) 00:13:23.550 3.476 - 3.491: 96.3616% ( 1) 00:13:23.550 3.491 - 3.505: 96.3695% ( 1) 00:13:23.550 3.520 - 3.535: 96.3773% ( 1) 00:13:23.550 3.535 - 3.549: 96.3851% ( 1) 00:13:23.550 3.578 - 3.593: 96.3929% ( 1) 00:13:23.550 3.636 - 3.651: 96.4007% ( 1) 00:13:23.550 3.651 - 3.665: 96.4085% ( 1) 00:13:23.550 3.695 - 3.709: 96.4241% ( 2) 00:13:23.550 3.709 - 3.724: 96.4397% ( 2) 00:13:23.550 3.724 - 3.753: 96.4475% ( 1) 00:13:23.550 3.753 - 3.782: 96.4788% ( 4) 00:13:23.550 3.782 - 3.811: 96.4866% ( 1) 00:13:23.550 3.840 - 3.869: 96.5178% ( 4) 00:13:23.550 3.869 - 3.898: 96.5334% ( 2) 00:13:23.550 3.927 - 3.956: 96.5490% ( 2) 00:13:23.550 3.956 - 3.985: 96.5646% ( 2) 00:13:23.550 4.015 - 4.044: 96.5959% ( 4) 00:13:23.550 4.073 - 4.102: 96.6115% ( 2) 00:13:23.550 4.160 - 4.189: 96.6193% ( 1) 00:13:23.550 4.247 - 4.276: 96.6349% ( 2) 00:13:23.550 4.305 - 4.335: 96.6427% ( 1) 00:13:23.550 4.655 - 4.684: 96.6505% ( 1) 00:13:23.550 4.684 - 4.713: 96.6583% ( 1) 00:13:23.550 4.916 - 4.945: 96.6661% ( 1) 00:13:23.550 6.284 - 6.313: 96.6740% ( 1) 00:13:23.550 6.400 - 6.429: 96.6818% ( 1) 00:13:23.550 6.487 - 6.516: 96.6896% ( 1) 00:13:23.550 6.516 - 6.545: 96.6974% ( 1) 00:13:23.550 6.604 - 6.633: 96.7052% ( 1) 00:13:23.550 6.691 - 6.720: 96.7130% ( 1) 00:13:23.550 6.749 - 6.778: 96.7208% ( 1) 00:13:23.550 6.865 - 6.895: 96.7286% ( 1) 00:13:23.550 6.982 - 7.011: 96.7364% ( 1) 00:13:23.550 7.040 - 7.069: 96.7442% ( 1) 00:13:23.550 7.127 - 7.156: 96.7520% ( 1) 00:13:23.550 7.156 - 7.185: 96.7676% ( 2) 00:13:23.550 7.418 - 7.447: 96.7755% ( 1) 00:13:23.550 7.505 - 7.564: 96.7989% ( 3) 00:13:23.550 7.913 - 7.971: 96.8067% ( 1) 00:13:23.550 7.971 - 8.029: 96.8223% ( 2) 00:13:23.550 8.087 - 8.145: 96.8301% ( 1) 00:13:23.550 8.320 - 8.378: 96.8457% ( 2) 00:13:23.550 8.436 - 8.495: 96.8613% ( 2) 00:13:23.550 8.553 - 8.611: 96.8691% ( 1) 00:13:23.550 8.844 - 8.902: 96.8770% ( 1) 00:13:23.550 8.960 - 9.018: 96.8926% ( 2) 00:13:23.550 9.135 - 9.193: 96.9082% ( 2) 00:13:23.550 9.251 - 9.309: 96.9160% ( 1) 00:13:23.550 9.309 - 9.367: 96.9316% ( 2) 00:13:23.550 9.367 - 9.425: 96.9394% ( 1) 00:13:23.550 9.425 - 9.484: 96.9472% ( 1) 00:13:23.550 9.600 - 9.658: 96.9628% ( 2) 00:13:23.550 9.658 - 9.716: 96.9706% ( 1) 00:13:23.550 9.891 - 9.949: 96.9785% ( 1) 00:13:23.550 10.007 - 10.065: 96.9863% ( 1) 00:13:23.550 10.124 - 10.182: 96.9941% ( 1) 00:13:23.550 10.240 - 10.298: 97.0019% ( 1) 00:13:23.550 10.415 - 10.473: 97.0097% ( 1) 00:13:23.550 11.113 - 11.171: 97.0175% ( 1) 00:13:23.550 11.171 - 11.229: 97.0253% ( 1) 00:13:23.550 11.345 - 
11.404: 97.0331% ( 1) 00:13:23.550 12.276 - 12.335: 97.0409% ( 1) 00:13:23.550 12.393 - 12.451: 97.0487% ( 1) 00:13:23.550 12.625 - 12.684: 97.0565% ( 1) 00:13:23.550 12.684 - 12.742: 97.0643% ( 1) 00:13:23.550 13.440 - 13.498: 97.0721% ( 1) 00:13:23.550 13.905 - 13.964: 97.0800% ( 1) 00:13:23.550 14.196 - 14.255: 97.0878% ( 1) 00:13:23.550 14.429 - 14.487: 97.0956% ( 1) 00:13:23.550 14.604 - 14.662: 97.1034% ( 1) 00:13:23.550 14.836 - 14.895: 97.1112% ( 1) 00:13:23.550 15.244 - 15.360: 97.1190% ( 1) 00:13:23.550 15.825 - 15.942: 97.1268% ( 1) 00:13:23.550 16.291 - 16.407: 97.1502% ( 3) 00:13:23.551 16.407 - 16.524: 97.2049% ( 7) 00:13:23.551 16.524 - 16.640: 97.2439% ( 5) 00:13:23.551 16.640 - 16.756: 97.3298% ( 11) 00:13:23.551 16.756 - 16.873: 97.4001% ( 9) 00:13:23.551 16.873 - 16.989: 97.5562% ( 20) 00:13:23.551 16.989 - 17.105: 97.7124% ( 20) 00:13:23.551 17.105 - 17.222: 97.8529% ( 18) 00:13:23.551 17.222 - 17.338: 97.9544% ( 13) 00:13:23.551 17.338 - 17.455: 98.0091% ( 7) 00:13:23.551 17.455 - 17.571: 98.0715% ( 8) 00:13:23.551 17.571 - 17.687: 98.1262% ( 7) 00:13:23.551 17.687 - 17.804: 98.1964% ( 9) 00:13:23.551 17.804 - 17.920: 98.2823% ( 11) 00:13:23.551 17.920 - 18.036: 98.4697% ( 24) 00:13:23.551 18.036 - 18.153: 98.7196% ( 32) 00:13:23.551 18.153 - 18.269: 98.8991% ( 23) 00:13:23.551 18.269 - 18.385: 98.9538% ( 7) 00:13:23.551 18.385 - 18.502: 99.0553% ( 13) 00:13:23.551 18.502 - 18.618: 99.0943% ( 5) 00:13:23.551 18.618 - 18.735: 99.1334% ( 5) 00:13:23.551 18.735 - 18.851: 99.1802% ( 6) 00:13:23.551 18.967 - 19.084: 99.1958% ( 2) 00:13:23.551 19.200 - 19.316: 99.2036% ( 1) 00:13:23.551 20.713 - 20.829: 99.2114% ( 1) 00:13:23.551 22.225 - 22.342: 99.2192% ( 1) 00:13:23.551 23.156 - 23.273: 99.2270% ( 1) 00:13:23.551 24.320 - 24.436: 99.2349% ( 1) 00:13:23.551 24.785 - 24.902: 99.2427% ( 1) 00:13:23.551 28.044 - 28.160: 99.2505% ( 1) 00:13:23.551 29.324 - 29.440: 99.2583% ( 1) 00:13:23.551 30.487 - 30.720: 99.2661% ( 1) 00:13:23.551 43.753 - 43.985: 99.2739% ( 1) 00:13:23.551 43.985 - 44.218: 99.2817% ( 1) 00:13:23.551 50.036 - 50.269: 99.2895% ( 1) 00:13:23.551 878.778 - 882.502: 99.2973% ( 1) 00:13:23.551 923.462 - 927.185: 99.3051% ( 1) 00:13:23.551 938.356 - 942.080: 99.3129% ( 1) 00:13:23.551 953.251 - 960.698: 99.3207% ( 1) 00:13:23.551 968.145 - 975.593: 99.3285% ( 1) 00:13:23.551 975.593 - 983.040: 99.3364% ( 1) 00:13:23.551 983.040 - 990.487: 99.3442% ( 1) 00:13:23.551 997.935 - 1005.382: 99.3520% ( 1) 00:13:23.551 1012.829 - 1020.276: 99.3598% ( 1) 00:13:23.551 1020.276 - 1027.724: 99.3676% ( 1) 00:13:23.551 1027.724 - 1035.171: 99.3754% ( 1) 00:13:23.551 1057.513 - 1064.960: 99.3832% ( 1) 00:13:23.551 1064.960 - 1072.407: 99.3910% ( 1) 00:13:23.551 1094.749 - 1102.196: 99.3988% ( 1) 00:13:23.551 1124.538 - 1131.985: 99.4066% ( 1) 00:13:23.551 1936.291 - 1951.185: 99.4144% ( 1) 00:13:23.551 1951.185 - 1966.080: 99.4222% ( 1) 00:13:23.551 1995.869 - 2010.764: 99.4300% ( 1) 00:13:23.551 2010.764 - 2025.658: 99.4457% ( 2) 00:13:23.551 2040.553 - 2055.447: 99.4535% ( 1) 00:13:23.551 2070.342 - 2085.236: 99.4613% ( 1) 00:13:23.551 2085.236 - 2100.131: 99.4769% ( 2) 00:13:23.551 2115.025 - 2129.920: 99.4847% ( 1) 00:13:23.551 2964.015 - 2978.909: 99.4925% ( 1) 00:13:23.551 2993.804 - 3008.698: 99.5003% ( 1) 00:13:23.551 3008.698 - 3023.593: 99.5081% ( 1) 00:13:23.551 3023.593 - 3038.487: 99.5315% ( 3) 00:13:23.551 3038.487 - 3053.382: 99.5628% ( 4) 00:13:23.551 3053.382 - 3068.276: 99.5862% ( 3) 00:13:23.551 3083.171 - 3098.065: 99.5940% ( 1) 00:13:23.551 3902.371 - 
3932.160: 99.6096% ( 2) 00:13:23.551 3932.160 - 3961.949: 99.6799% ( 9) 00:13:23.551 3961.949 - 3991.738: 99.7423% ( 8) 00:13:23.551 3991.738 - 4021.527: 99.8204% ( 10) 00:13:23.551 4021.527 - 4051.316: 99.8673% ( 6) 00:13:23.551 4051.316 - 4081.105: 99.8829% ( 2) 00:13:23.551 4081.105 - 4110.895: 99.9063% ( 3) 00:13:23.551 4110.895 - 4140.684: 99.9141% ( 1) 00:13:23.551 4974.778 - 5004.567: 99.9219% ( 1) 00:13:23.551 5004.567 - 5034.356: 99.9453% ( 3) 00:13:23.551 5034.356 - 5064.145: 99.9610% ( 2) 00:13:23.551 5898.240 - 5928.029: 99.9688% ( 1) 00:13:23.551 5928.029 - 5957.818: 99.9766% ( 1) 00:13:23.551 5987.607 - 6017.396: 99.9922% ( 2) 00:13:23.551 6136.553 - 6166.342: 100.0000% ( 1) 00:13:23.551 00:13:23.551 22:13:20 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:23.551 22:13:20 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:23.551 22:13:20 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:23.551 22:13:20 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:23.551 22:13:20 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:23.809 [2024-11-17 22:13:20.335312] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:23.809 [ 00:13:23.809 { 00:13:23.809 "allow_any_host": true, 00:13:23.809 "hosts": [], 00:13:23.809 "listen_addresses": [], 00:13:23.809 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:23.810 "subtype": "Discovery" 00:13:23.810 }, 00:13:23.810 { 00:13:23.810 "allow_any_host": true, 00:13:23.810 "hosts": [], 00:13:23.810 "listen_addresses": [ 00:13:23.810 { 00:13:23.810 "adrfam": "IPv4", 00:13:23.810 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:23.810 "transport": "VFIOUSER", 00:13:23.810 "trsvcid": "0", 00:13:23.810 "trtype": "VFIOUSER" 00:13:23.810 } 00:13:23.810 ], 00:13:23.810 "max_cntlid": 65519, 00:13:23.810 "max_namespaces": 32, 00:13:23.810 "min_cntlid": 1, 00:13:23.810 "model_number": "SPDK bdev Controller", 00:13:23.810 "namespaces": [ 00:13:23.810 { 00:13:23.810 "bdev_name": "Malloc1", 00:13:23.810 "name": "Malloc1", 00:13:23.810 "nguid": "235BFBE0E25D47148A35DA37B1DD850D", 00:13:23.810 "nsid": 1, 00:13:23.810 "uuid": "235bfbe0-e25d-4714-8a35-da37b1dd850d" 00:13:23.810 } 00:13:23.810 ], 00:13:23.810 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:23.810 "serial_number": "SPDK1", 00:13:23.810 "subtype": "NVMe" 00:13:23.810 }, 00:13:23.810 { 00:13:23.810 "allow_any_host": true, 00:13:23.810 "hosts": [], 00:13:23.810 "listen_addresses": [ 00:13:23.810 { 00:13:23.810 "adrfam": "IPv4", 00:13:23.810 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:23.810 "transport": "VFIOUSER", 00:13:23.810 "trsvcid": "0", 00:13:23.810 "trtype": "VFIOUSER" 00:13:23.810 } 00:13:23.810 ], 00:13:23.810 "max_cntlid": 65519, 00:13:23.810 "max_namespaces": 32, 00:13:23.810 "min_cntlid": 1, 00:13:23.810 "model_number": "SPDK bdev Controller", 00:13:23.810 "namespaces": [ 00:13:23.810 { 00:13:23.810 "bdev_name": "Malloc2", 00:13:23.810 "name": "Malloc2", 00:13:23.810 "nguid": "15CD31AAC3BA455E87DB66D1345DDE67", 00:13:23.810 "nsid": 1, 00:13:23.810 "uuid": "15cd31aa-c3ba-455e-87db-66d1345dde67" 00:13:23.810 } 00:13:23.810 ], 00:13:23.810 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:23.810 "serial_number": "SPDK2", 00:13:23.810 
"subtype": "NVMe" 00:13:23.810 } 00:13:23.810 ] 00:13:23.810 22:13:20 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:23.810 22:13:20 -- target/nvmf_vfio_user.sh@34 -- # aerpid=71157 00:13:23.810 22:13:20 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:23.810 22:13:20 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:23.810 22:13:20 -- common/autotest_common.sh@1254 -- # local i=0 00:13:23.810 22:13:20 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:23.810 22:13:20 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:13:23.810 22:13:20 -- common/autotest_common.sh@1257 -- # i=1 00:13:23.810 22:13:20 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:13:24.069 22:13:20 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:24.069 22:13:20 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:13:24.069 22:13:20 -- common/autotest_common.sh@1257 -- # i=2 00:13:24.069 22:13:20 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:13:24.069 22:13:20 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:24.069 22:13:20 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:24.069 22:13:20 -- common/autotest_common.sh@1265 -- # return 0 00:13:24.069 22:13:20 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:24.069 22:13:20 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:24.328 Malloc3 00:13:24.328 22:13:20 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:24.586 22:13:21 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:24.586 Asynchronous Event Request test 00:13:24.586 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:24.586 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:24.586 Registering asynchronous event callbacks... 00:13:24.586 Starting namespace attribute notice tests for all controllers... 00:13:24.586 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:24.586 aer_cb - Changed Namespace 00:13:24.586 Cleaning up... 
00:13:24.845 [ 00:13:24.845 { 00:13:24.845 "allow_any_host": true, 00:13:24.845 "hosts": [], 00:13:24.845 "listen_addresses": [], 00:13:24.845 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:24.845 "subtype": "Discovery" 00:13:24.845 }, 00:13:24.845 { 00:13:24.845 "allow_any_host": true, 00:13:24.845 "hosts": [], 00:13:24.845 "listen_addresses": [ 00:13:24.845 { 00:13:24.845 "adrfam": "IPv4", 00:13:24.845 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:24.845 "transport": "VFIOUSER", 00:13:24.845 "trsvcid": "0", 00:13:24.845 "trtype": "VFIOUSER" 00:13:24.845 } 00:13:24.845 ], 00:13:24.845 "max_cntlid": 65519, 00:13:24.845 "max_namespaces": 32, 00:13:24.845 "min_cntlid": 1, 00:13:24.845 "model_number": "SPDK bdev Controller", 00:13:24.845 "namespaces": [ 00:13:24.845 { 00:13:24.845 "bdev_name": "Malloc1", 00:13:24.845 "name": "Malloc1", 00:13:24.845 "nguid": "235BFBE0E25D47148A35DA37B1DD850D", 00:13:24.845 "nsid": 1, 00:13:24.845 "uuid": "235bfbe0-e25d-4714-8a35-da37b1dd850d" 00:13:24.845 }, 00:13:24.845 { 00:13:24.845 "bdev_name": "Malloc3", 00:13:24.845 "name": "Malloc3", 00:13:24.845 "nguid": "9AC5E237AA5C49E489C662DE8E61891B", 00:13:24.845 "nsid": 2, 00:13:24.845 "uuid": "9ac5e237-aa5c-49e4-89c6-62de8e61891b" 00:13:24.845 } 00:13:24.845 ], 00:13:24.845 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:24.845 "serial_number": "SPDK1", 00:13:24.845 "subtype": "NVMe" 00:13:24.845 }, 00:13:24.845 { 00:13:24.845 "allow_any_host": true, 00:13:24.845 "hosts": [], 00:13:24.845 "listen_addresses": [ 00:13:24.845 { 00:13:24.845 "adrfam": "IPv4", 00:13:24.845 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:24.845 "transport": "VFIOUSER", 00:13:24.845 "trsvcid": "0", 00:13:24.845 "trtype": "VFIOUSER" 00:13:24.845 } 00:13:24.845 ], 00:13:24.845 "max_cntlid": 65519, 00:13:24.845 "max_namespaces": 32, 00:13:24.845 "min_cntlid": 1, 00:13:24.845 "model_number": "SPDK bdev Controller", 00:13:24.845 "namespaces": [ 00:13:24.845 { 00:13:24.845 "bdev_name": "Malloc2", 00:13:24.845 "name": "Malloc2", 00:13:24.845 "nguid": "15CD31AAC3BA455E87DB66D1345DDE67", 00:13:24.845 "nsid": 1, 00:13:24.845 "uuid": "15cd31aa-c3ba-455e-87db-66d1345dde67" 00:13:24.845 } 00:13:24.845 ], 00:13:24.845 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:24.845 "serial_number": "SPDK2", 00:13:24.846 "subtype": "NVMe" 00:13:24.846 } 00:13:24.846 ] 00:13:24.846 22:13:21 -- target/nvmf_vfio_user.sh@44 -- # wait 71157 00:13:24.846 22:13:21 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:24.846 22:13:21 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:24.846 22:13:21 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:24.846 22:13:21 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:25.105 [2024-11-17 22:13:21.459183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:25.106 [2024-11-17 22:13:21.459228] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71194 ] 00:13:25.106 [2024-11-17 22:13:21.591121] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:25.106 [2024-11-17 22:13:21.600095] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:25.106 [2024-11-17 22:13:21.600134] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdca7323000 00:13:25.106 [2024-11-17 22:13:21.601086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:25.106 [2024-11-17 22:13:21.602100] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:25.106 [2024-11-17 22:13:21.603103] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:25.106 [2024-11-17 22:13:21.604096] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:25.106 [2024-11-17 22:13:21.605113] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:25.106 [2024-11-17 22:13:21.606128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:25.106 [2024-11-17 22:13:21.607129] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:25.106 [2024-11-17 22:13:21.608134] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:25.106 [2024-11-17 22:13:21.609139] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:25.106 [2024-11-17 22:13:21.609162] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdca7318000 00:13:25.106 [2024-11-17 22:13:21.610437] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:25.106 [2024-11-17 22:13:21.629221] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:25.106 [2024-11-17 22:13:21.629261] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:25.106 [2024-11-17 22:13:21.631396] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:25.106 [2024-11-17 22:13:21.631453] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:25.106 [2024-11-17 22:13:21.631528] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:25.106 [2024-11-17 
22:13:21.631554] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:25.106 [2024-11-17 22:13:21.631560] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:25.106 [2024-11-17 22:13:21.632399] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:25.106 [2024-11-17 22:13:21.632426] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:25.106 [2024-11-17 22:13:21.632438] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:25.106 [2024-11-17 22:13:21.633396] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:25.106 [2024-11-17 22:13:21.633422] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:25.106 [2024-11-17 22:13:21.633435] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:25.106 [2024-11-17 22:13:21.634413] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:25.106 [2024-11-17 22:13:21.634438] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:25.106 [2024-11-17 22:13:21.635413] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:25.106 [2024-11-17 22:13:21.635435] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:25.106 [2024-11-17 22:13:21.635457] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:25.106 [2024-11-17 22:13:21.635467] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:25.106 [2024-11-17 22:13:21.635573] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:25.106 [2024-11-17 22:13:21.635579] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:25.106 [2024-11-17 22:13:21.635584] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:25.106 [2024-11-17 22:13:21.636444] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:25.106 [2024-11-17 22:13:21.637428] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:25.106 [2024-11-17 22:13:21.638450] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:13:25.106 [2024-11-17 22:13:21.639462] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:25.106 [2024-11-17 22:13:21.640439] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:25.106 [2024-11-17 22:13:21.640462] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:25.106 [2024-11-17 22:13:21.640469] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:25.106 [2024-11-17 22:13:21.640489] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:25.106 [2024-11-17 22:13:21.640506] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:25.106 [2024-11-17 22:13:21.640522] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:25.106 [2024-11-17 22:13:21.640528] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:25.106 [2024-11-17 22:13:21.640542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:25.106 [2024-11-17 22:13:21.646771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:25.106 [2024-11-17 22:13:21.646796] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:25.106 [2024-11-17 22:13:21.646802] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:25.106 [2024-11-17 22:13:21.646807] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:25.106 [2024-11-17 22:13:21.646812] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:25.106 [2024-11-17 22:13:21.646817] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:25.106 [2024-11-17 22:13:21.646822] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:25.106 [2024-11-17 22:13:21.646827] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:25.106 [2024-11-17 22:13:21.646842] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:25.106 [2024-11-17 22:13:21.646854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:25.106 [2024-11-17 22:13:21.654777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:25.106 [2024-11-17 22:13:21.654809] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.106 [2024-11-17 22:13:21.654820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.106 [2024-11-17 22:13:21.654828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.106 [2024-11-17 22:13:21.654837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.106 [2024-11-17 22:13:21.654843] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:25.106 [2024-11-17 22:13:21.654855] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:25.106 [2024-11-17 22:13:21.654866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:25.106 [2024-11-17 22:13:21.662781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:25.106 [2024-11-17 22:13:21.662801] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:25.106 [2024-11-17 22:13:21.662808] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:25.106 [2024-11-17 22:13:21.662817] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:25.106 [2024-11-17 22:13:21.662829] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:25.106 [2024-11-17 22:13:21.662840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:25.106 [2024-11-17 22:13:21.669752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:25.106 [2024-11-17 22:13:21.669824] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:25.106 [2024-11-17 22:13:21.669862] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:25.107 [2024-11-17 22:13:21.669873] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:25.107 [2024-11-17 22:13:21.669878] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:25.107 [2024-11-17 22:13:21.669886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:25.107 [2024-11-17 22:13:21.676779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:25.107 [2024-11-17 
22:13:21.676811] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:25.107 [2024-11-17 22:13:21.676825] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:25.107 [2024-11-17 22:13:21.676836] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:25.107 [2024-11-17 22:13:21.676844] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:25.107 [2024-11-17 22:13:21.676849] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:25.107 [2024-11-17 22:13:21.676856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:25.107 [2024-11-17 22:13:21.684749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:25.107 [2024-11-17 22:13:21.684781] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:25.107 [2024-11-17 22:13:21.684795] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:25.107 [2024-11-17 22:13:21.684805] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:25.107 [2024-11-17 22:13:21.684810] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:25.107 [2024-11-17 22:13:21.684817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:25.107 [2024-11-17 22:13:21.692747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:25.107 [2024-11-17 22:13:21.692772] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:25.107 [2024-11-17 22:13:21.692783] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:25.107 [2024-11-17 22:13:21.692795] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:25.107 [2024-11-17 22:13:21.692803] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:25.107 [2024-11-17 22:13:21.692808] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:25.107 [2024-11-17 22:13:21.692814] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:25.107 [2024-11-17 22:13:21.692819] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:25.107 [2024-11-17 22:13:21.692824] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:25.107 [2024-11-17 22:13:21.692847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:25.107 [2024-11-17 22:13:21.700752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:25.107 [2024-11-17 22:13:21.700779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:25.107 [2024-11-17 22:13:21.710851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:25.107 [2024-11-17 22:13:21.710886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:25.107 [2024-11-17 22:13:21.717825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:25.107 [2024-11-17 22:13:21.717889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:25.366 [2024-11-17 22:13:21.725821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:25.367 [2024-11-17 22:13:21.725897] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:25.367 [2024-11-17 22:13:21.725904] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:25.367 [2024-11-17 22:13:21.725908] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:25.367 [2024-11-17 22:13:21.725912] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:25.367 [2024-11-17 22:13:21.725920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:25.367 [2024-11-17 22:13:21.725929] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:25.367 [2024-11-17 22:13:21.725934] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:25.367 [2024-11-17 22:13:21.725941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:25.367 [2024-11-17 22:13:21.725949] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:25.367 [2024-11-17 22:13:21.725954] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:25.367 [2024-11-17 22:13:21.725960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:25.367 [2024-11-17 22:13:21.725969] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:25.367 [2024-11-17 22:13:21.725974] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:25.367 [2024-11-17 22:13:21.725980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:25.367 [2024-11-17 22:13:21.732897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:25.367 [2024-11-17 22:13:21.732950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:25.367 [2024-11-17 22:13:21.732979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:25.367 [2024-11-17 22:13:21.732987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:25.367 ===================================================== 00:13:25.367 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:25.367 ===================================================== 00:13:25.367 Controller Capabilities/Features 00:13:25.367 ================================ 00:13:25.367 Vendor ID: 4e58 00:13:25.367 Subsystem Vendor ID: 4e58 00:13:25.367 Serial Number: SPDK2 00:13:25.367 Model Number: SPDK bdev Controller 00:13:25.367 Firmware Version: 24.01.1 00:13:25.367 Recommended Arb Burst: 6 00:13:25.367 IEEE OUI Identifier: 8d 6b 50 00:13:25.367 Multi-path I/O 00:13:25.367 May have multiple subsystem ports: Yes 00:13:25.367 May have multiple controllers: Yes 00:13:25.367 Associated with SR-IOV VF: No 00:13:25.367 Max Data Transfer Size: 131072 00:13:25.367 Max Number of Namespaces: 32 00:13:25.367 Max Number of I/O Queues: 127 00:13:25.367 NVMe Specification Version (VS): 1.3 00:13:25.367 NVMe Specification Version (Identify): 1.3 00:13:25.367 Maximum Queue Entries: 256 00:13:25.367 Contiguous Queues Required: Yes 00:13:25.367 Arbitration Mechanisms Supported 00:13:25.367 Weighted Round Robin: Not Supported 00:13:25.367 Vendor Specific: Not Supported 00:13:25.367 Reset Timeout: 15000 ms 00:13:25.367 Doorbell Stride: 4 bytes 00:13:25.367 NVM Subsystem Reset: Not Supported 00:13:25.367 Command Sets Supported 00:13:25.367 NVM Command Set: Supported 00:13:25.367 Boot Partition: Not Supported 00:13:25.367 Memory Page Size Minimum: 4096 bytes 00:13:25.367 Memory Page Size Maximum: 4096 bytes 00:13:25.367 Persistent Memory Region: Not Supported 00:13:25.367 Optional Asynchronous Events Supported 00:13:25.367 Namespace Attribute Notices: Supported 00:13:25.367 Firmware Activation Notices: Not Supported 00:13:25.367 ANA Change Notices: Not Supported 00:13:25.367 PLE Aggregate Log Change Notices: Not Supported 00:13:25.367 LBA Status Info Alert Notices: Not Supported 00:13:25.367 EGE Aggregate Log Change Notices: Not Supported 00:13:25.367 Normal NVM Subsystem Shutdown event: Not Supported 00:13:25.367 Zone Descriptor Change Notices: Not Supported 00:13:25.367 Discovery Log Change Notices: Not Supported 00:13:25.367 Controller Attributes 00:13:25.367 128-bit Host Identifier: Supported 00:13:25.367 Non-Operational Permissive Mode: Not Supported 00:13:25.367 NVM Sets: Not Supported 00:13:25.367 Read Recovery Levels: Not Supported 00:13:25.367 Endurance Groups: Not Supported 00:13:25.367 Predictable Latency Mode: Not Supported 00:13:25.367 Traffic Based Keep ALive: Not Supported 00:13:25.367 Namespace Granularity: Not Supported 00:13:25.367 SQ Associations: Not Supported 00:13:25.367 UUID List: Not Supported 00:13:25.367 Multi-Domain Subsystem: Not Supported 00:13:25.367 Fixed Capacity Management: Not Supported 00:13:25.367 
Variable Capacity Management: Not Supported 00:13:25.367 Delete Endurance Group: Not Supported 00:13:25.367 Delete NVM Set: Not Supported 00:13:25.367 Extended LBA Formats Supported: Not Supported 00:13:25.367 Flexible Data Placement Supported: Not Supported 00:13:25.367 00:13:25.367 Controller Memory Buffer Support 00:13:25.367 ================================ 00:13:25.367 Supported: No 00:13:25.367 00:13:25.367 Persistent Memory Region Support 00:13:25.367 ================================ 00:13:25.367 Supported: No 00:13:25.367 00:13:25.367 Admin Command Set Attributes 00:13:25.367 ============================ 00:13:25.367 Security Send/Receive: Not Supported 00:13:25.367 Format NVM: Not Supported 00:13:25.367 Firmware Activate/Download: Not Supported 00:13:25.367 Namespace Management: Not Supported 00:13:25.367 Device Self-Test: Not Supported 00:13:25.367 Directives: Not Supported 00:13:25.367 NVMe-MI: Not Supported 00:13:25.367 Virtualization Management: Not Supported 00:13:25.367 Doorbell Buffer Config: Not Supported 00:13:25.367 Get LBA Status Capability: Not Supported 00:13:25.367 Command & Feature Lockdown Capability: Not Supported 00:13:25.367 Abort Command Limit: 4 00:13:25.367 Async Event Request Limit: 4 00:13:25.367 Number of Firmware Slots: N/A 00:13:25.367 Firmware Slot 1 Read-Only: N/A 00:13:25.367 Firmware Activation Without Reset: N/A 00:13:25.367 Multiple Update Detection Support: N/A 00:13:25.367 Firmware Update Granularity: No Information Provided 00:13:25.367 Per-Namespace SMART Log: No 00:13:25.367 Asymmetric Namespace Access Log Page: Not Supported 00:13:25.367 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:25.367 Command Effects Log Page: Supported 00:13:25.367 Get Log Page Extended Data: Supported 00:13:25.367 Telemetry Log Pages: Not Supported 00:13:25.367 Persistent Event Log Pages: Not Supported 00:13:25.367 Supported Log Pages Log Page: May Support 00:13:25.367 Commands Supported & Effects Log Page: Not Supported 00:13:25.367 Feature Identifiers & Effects Log Page:May Support 00:13:25.368 NVMe-MI Commands & Effects Log Page: May Support 00:13:25.368 Data Area 4 for Telemetry Log: Not Supported 00:13:25.368 Error Log Page Entries Supported: 128 00:13:25.368 Keep Alive: Supported 00:13:25.368 Keep Alive Granularity: 10000 ms 00:13:25.368 00:13:25.368 NVM Command Set Attributes 00:13:25.368 ========================== 00:13:25.368 Submission Queue Entry Size 00:13:25.368 Max: 64 00:13:25.368 Min: 64 00:13:25.368 Completion Queue Entry Size 00:13:25.368 Max: 16 00:13:25.368 Min: 16 00:13:25.368 Number of Namespaces: 32 00:13:25.368 Compare Command: Supported 00:13:25.368 Write Uncorrectable Command: Not Supported 00:13:25.368 Dataset Management Command: Supported 00:13:25.368 Write Zeroes Command: Supported 00:13:25.368 Set Features Save Field: Not Supported 00:13:25.368 Reservations: Not Supported 00:13:25.368 Timestamp: Not Supported 00:13:25.368 Copy: Supported 00:13:25.368 Volatile Write Cache: Present 00:13:25.368 Atomic Write Unit (Normal): 1 00:13:25.368 Atomic Write Unit (PFail): 1 00:13:25.368 Atomic Compare & Write Unit: 1 00:13:25.368 Fused Compare & Write: Supported 00:13:25.368 Scatter-Gather List 00:13:25.368 SGL Command Set: Supported (Dword aligned) 00:13:25.368 SGL Keyed: Not Supported 00:13:25.368 SGL Bit Bucket Descriptor: Not Supported 00:13:25.368 SGL Metadata Pointer: Not Supported 00:13:25.368 Oversized SGL: Not Supported 00:13:25.368 SGL Metadata Address: Not Supported 00:13:25.368 SGL Offset: Not Supported 00:13:25.368 Transport SGL Data 
Block: Not Supported 00:13:25.368 Replay Protected Memory Block: Not Supported 00:13:25.368 00:13:25.368 Firmware Slot Information 00:13:25.368 ========================= 00:13:25.368 Active slot: 1 00:13:25.368 Slot 1 Firmware Revision: 24.01.1 00:13:25.368 00:13:25.368 00:13:25.368 Commands Supported and Effects 00:13:25.368 ============================== 00:13:25.368 Admin Commands 00:13:25.368 -------------- 00:13:25.368 Get Log Page (02h): Supported 00:13:25.368 Identify (06h): Supported 00:13:25.368 Abort (08h): Supported 00:13:25.368 Set Features (09h): Supported 00:13:25.368 Get Features (0Ah): Supported 00:13:25.368 Asynchronous Event Request (0Ch): Supported 00:13:25.368 Keep Alive (18h): Supported 00:13:25.368 I/O Commands 00:13:25.368 ------------ 00:13:25.368 Flush (00h): Supported LBA-Change 00:13:25.368 Write (01h): Supported LBA-Change 00:13:25.368 Read (02h): Supported 00:13:25.368 Compare (05h): Supported 00:13:25.368 Write Zeroes (08h): Supported LBA-Change 00:13:25.368 Dataset Management (09h): Supported LBA-Change 00:13:25.368 Copy (19h): Supported LBA-Change 00:13:25.368 Unknown (79h): Supported LBA-Change 00:13:25.368 Unknown (7Ah): Supported 00:13:25.368 00:13:25.368 Error Log 00:13:25.368 ========= 00:13:25.368 00:13:25.368 Arbitration 00:13:25.368 =========== 00:13:25.368 Arbitration Burst: 1 00:13:25.368 00:13:25.368 Power Management 00:13:25.368 ================ 00:13:25.368 Number of Power States: 1 00:13:25.368 Current Power State: Power State #0 00:13:25.368 Power State #0: 00:13:25.368 Max Power: 0.00 W 00:13:25.368 Non-Operational State: Operational 00:13:25.368 Entry Latency: Not Reported 00:13:25.368 Exit Latency: Not Reported 00:13:25.368 Relative Read Throughput: 0 00:13:25.368 Relative Read Latency: 0 00:13:25.368 Relative Write Throughput: 0 00:13:25.368 Relative Write Latency: 0 00:13:25.368 Idle Power: Not Reported 00:13:25.368 Active Power: Not Reported 00:13:25.368 Non-Operational Permissive Mode: Not Supported 00:13:25.368 00:13:25.368 Health Information 00:13:25.368 ================== 00:13:25.368 Critical Warnings: 00:13:25.368 Available Spare Space: OK 00:13:25.368 Temperature: OK 00:13:25.368 Device Reliability: OK 00:13:25.368 Read Only: No 00:13:25.368 Volatile Memory Backup: OK 00:13:25.368 Current Temperature: 0 Kelvin[2024-11-17 22:13:21.733107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:25.368 [2024-11-17 22:13:21.739941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:25.368 [2024-11-17 22:13:21.740009] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:25.368 [2024-11-17 22:13:21.740023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.368 [2024-11-17 22:13:21.740032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.368 [2024-11-17 22:13:21.740039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.368 [2024-11-17 22:13:21.740046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.368 [2024-11-17 22:13:21.740147] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:25.368 [2024-11-17 22:13:21.740166] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:25.368 [2024-11-17 22:13:21.741060] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:25.368 [2024-11-17 22:13:21.741094] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:25.368 [2024-11-17 22:13:21.741989] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:25.368 [2024-11-17 22:13:21.742030] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:25.368 [2024-11-17 22:13:21.742247] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:25.368 [2024-11-17 22:13:21.745872] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:25.368 (-273 Celsius) 00:13:25.368 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:25.368 Available Spare: 0% 00:13:25.368 Available Spare Threshold: 0% 00:13:25.368 Life Percentage Used: 0% 00:13:25.368 Data Units Read: 0 00:13:25.368 Data Units Written: 0 00:13:25.368 Host Read Commands: 0 00:13:25.368 Host Write Commands: 0 00:13:25.368 Controller Busy Time: 0 minutes 00:13:25.368 Power Cycles: 0 00:13:25.368 Power On Hours: 0 hours 00:13:25.368 Unsafe Shutdowns: 0 00:13:25.368 Unrecoverable Media Errors: 0 00:13:25.368 Lifetime Error Log Entries: 0 00:13:25.368 Warning Temperature Time: 0 minutes 00:13:25.368 Critical Temperature Time: 0 minutes 00:13:25.368 00:13:25.368 Number of Queues 00:13:25.368 ================ 00:13:25.368 Number of I/O Submission Queues: 127 00:13:25.368 Number of I/O Completion Queues: 127 00:13:25.368 00:13:25.368 Active Namespaces 00:13:25.368 ================= 00:13:25.368 Namespace ID:1 00:13:25.368 Error Recovery Timeout: Unlimited 00:13:25.368 Command Set Identifier: NVM (00h) 00:13:25.368 Deallocate: Supported 00:13:25.368 Deallocated/Unwritten Error: Not Supported 00:13:25.368 Deallocated Read Value: Unknown 00:13:25.368 Deallocate in Write Zeroes: Not Supported 00:13:25.368 Deallocated Guard Field: 0xFFFF 00:13:25.368 Flush: Supported 00:13:25.368 Reservation: Supported 00:13:25.369 Namespace Sharing Capabilities: Multiple Controllers 00:13:25.369 Size (in LBAs): 131072 (0GiB) 00:13:25.369 Capacity (in LBAs): 131072 (0GiB) 00:13:25.369 Utilization (in LBAs): 131072 (0GiB) 00:13:25.369 NGUID: 15CD31AAC3BA455E87DB66D1345DDE67 00:13:25.369 UUID: 15cd31aa-c3ba-455e-87db-66d1345dde67 00:13:25.369 Thin Provisioning: Not Supported 00:13:25.369 Per-NS Atomic Units: Yes 00:13:25.369 Atomic Boundary Size (Normal): 0 00:13:25.369 Atomic Boundary Size (PFail): 0 00:13:25.369 Atomic Boundary Offset: 0 00:13:25.369 Maximum Single Source Range Length: 65535 00:13:25.369 Maximum Copy Length: 65535 00:13:25.369 Maximum Source Range Count: 1 00:13:25.369 NGUID/EUI64 Never Reused: No 00:13:25.369 Namespace Write Protected: No 00:13:25.369 Number of LBA Formats: 1 00:13:25.369 Current LBA Format: LBA Format #00 00:13:25.369 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:25.369 00:13:25.369 
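For reference, the controller dump above can be reproduced by hand with the same identify invocation the harness used (sketch only; it assumes the vfio-user target from this run is still listening on /var/run/vfio-user/domain/vfio-user2/2):
  # Same arguments as recorded at the start of this step; the -L flags only enable extra debug logging.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -g -L nvme -L nvme_vfio -L vfio_pci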
22:13:21 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:30.711 Initializing NVMe Controllers 00:13:30.711 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:30.711 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:30.711 Initialization complete. Launching workers. 00:13:30.711 ======================================================== 00:13:30.711 Latency(us) 00:13:30.711 Device Information : IOPS MiB/s Average min max 00:13:30.711 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 38872.97 151.85 3292.48 1041.82 10744.78 00:13:30.711 ======================================================== 00:13:30.711 Total : 38872.97 151.85 3292.48 1041.82 10744.78 00:13:30.711 00:13:30.711 22:13:27 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:36.001 Initializing NVMe Controllers 00:13:36.001 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:36.001 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:36.001 Initialization complete. Launching workers. 00:13:36.001 ======================================================== 00:13:36.001 Latency(us) 00:13:36.001 Device Information : IOPS MiB/s Average min max 00:13:36.001 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 38683.10 151.11 3308.91 1018.76 10114.94 00:13:36.001 ======================================================== 00:13:36.001 Total : 38683.10 151.11 3308.91 1018.76 10114.94 00:13:36.001 00:13:36.001 22:13:32 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:42.568 Initializing NVMe Controllers 00:13:42.568 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:42.568 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:42.568 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:42.568 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:42.568 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:42.568 Initialization complete. Launching workers. 
00:13:42.568 Starting thread on core 2 00:13:42.568 Starting thread on core 3 00:13:42.568 Starting thread on core 1 00:13:42.568 22:13:37 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:45.101 Initializing NVMe Controllers 00:13:45.101 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:45.101 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:45.101 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:45.101 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:45.101 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:45.101 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:45.101 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:45.101 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:45.101 Initialization complete. Launching workers. 00:13:45.101 Starting thread on core 1 with urgent priority queue 00:13:45.101 Starting thread on core 2 with urgent priority queue 00:13:45.101 Starting thread on core 3 with urgent priority queue 00:13:45.101 Starting thread on core 0 with urgent priority queue 00:13:45.101 SPDK bdev Controller (SPDK2 ) core 0: 5434.00 IO/s 18.40 secs/100000 ios 00:13:45.101 SPDK bdev Controller (SPDK2 ) core 1: 4337.67 IO/s 23.05 secs/100000 ios 00:13:45.101 SPDK bdev Controller (SPDK2 ) core 2: 4828.33 IO/s 20.71 secs/100000 ios 00:13:45.101 SPDK bdev Controller (SPDK2 ) core 3: 5024.67 IO/s 19.90 secs/100000 ios 00:13:45.101 ======================================================== 00:13:45.101 00:13:45.101 22:13:41 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:45.101 Initializing NVMe Controllers 00:13:45.101 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:45.101 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:45.101 Namespace ID: 1 size: 0GB 00:13:45.101 Initialization complete. 00:13:45.101 INFO: using host memory buffer for IO 00:13:45.101 Hello world! 00:13:45.101 22:13:41 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:46.478 Initializing NVMe Controllers 00:13:46.478 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:46.478 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:46.478 Initialization complete. Launching workers. 
00:13:46.478 submit (in ns) avg, min, max = 6514.0, 3689.1, 4014175.5 00:13:46.478 complete (in ns) avg, min, max = 36179.3, 2063.6, 7073250.9 00:13:46.478 00:13:46.478 Submit histogram 00:13:46.478 ================ 00:13:46.478 Range in us Cumulative Count 00:13:46.478 3.680 - 3.695: 0.0304% ( 3) 00:13:46.478 3.695 - 3.709: 0.0405% ( 1) 00:13:46.478 3.709 - 3.724: 0.0506% ( 1) 00:13:46.478 3.724 - 3.753: 0.1315% ( 8) 00:13:46.478 3.753 - 3.782: 0.9611% ( 82) 00:13:46.478 3.782 - 3.811: 2.7115% ( 173) 00:13:46.478 3.811 - 3.840: 5.0081% ( 227) 00:13:46.478 3.840 - 3.869: 9.4395% ( 438) 00:13:46.478 3.869 - 3.898: 18.3225% ( 878) 00:13:46.478 3.898 - 3.927: 28.7232% ( 1028) 00:13:46.478 3.927 - 3.956: 37.8996% ( 907) 00:13:46.478 3.956 - 3.985: 46.4185% ( 842) 00:13:46.478 3.985 - 4.015: 55.8175% ( 929) 00:13:46.478 4.015 - 4.044: 63.0413% ( 714) 00:13:46.478 4.044 - 4.073: 67.7256% ( 463) 00:13:46.478 4.073 - 4.102: 72.4909% ( 471) 00:13:46.478 4.102 - 4.131: 76.0421% ( 351) 00:13:46.478 4.131 - 4.160: 78.5714% ( 250) 00:13:46.478 4.160 - 4.189: 81.2728% ( 267) 00:13:46.478 4.189 - 4.218: 83.5795% ( 228) 00:13:46.478 4.218 - 4.247: 85.7750% ( 217) 00:13:46.478 4.247 - 4.276: 87.8693% ( 207) 00:13:46.478 4.276 - 4.305: 89.8826% ( 199) 00:13:46.478 4.305 - 4.335: 91.7038% ( 180) 00:13:46.478 4.335 - 4.364: 93.3529% ( 163) 00:13:46.478 4.364 - 4.393: 94.5063% ( 114) 00:13:46.478 4.393 - 4.422: 95.3055% ( 79) 00:13:46.478 4.422 - 4.451: 95.8620% ( 55) 00:13:46.478 4.451 - 4.480: 96.3173% ( 45) 00:13:46.478 4.480 - 4.509: 96.7321% ( 41) 00:13:46.478 4.509 - 4.538: 96.9243% ( 19) 00:13:46.478 4.538 - 4.567: 97.0963% ( 17) 00:13:46.478 4.567 - 4.596: 97.3088% ( 21) 00:13:46.478 4.596 - 4.625: 97.4605% ( 15) 00:13:46.478 4.625 - 4.655: 97.5718% ( 11) 00:13:46.478 4.655 - 4.684: 97.7337% ( 16) 00:13:46.478 4.684 - 4.713: 97.8146% ( 8) 00:13:46.478 4.713 - 4.742: 97.9361% ( 12) 00:13:46.478 4.742 - 4.771: 98.0170% ( 8) 00:13:46.478 4.771 - 4.800: 98.0777% ( 6) 00:13:46.478 4.800 - 4.829: 98.1789% ( 10) 00:13:46.478 4.829 - 4.858: 98.2295% ( 5) 00:13:46.478 4.858 - 4.887: 98.3104% ( 8) 00:13:46.478 4.887 - 4.916: 98.3812% ( 7) 00:13:46.478 4.916 - 4.945: 98.4419% ( 6) 00:13:46.478 4.945 - 4.975: 98.5026% ( 6) 00:13:46.478 4.975 - 5.004: 98.5229% ( 2) 00:13:46.478 5.004 - 5.033: 98.5633% ( 4) 00:13:46.478 5.033 - 5.062: 98.5735% ( 1) 00:13:46.478 5.062 - 5.091: 98.6038% ( 3) 00:13:46.478 5.091 - 5.120: 98.6139% ( 1) 00:13:46.478 5.120 - 5.149: 98.6342% ( 2) 00:13:46.478 5.149 - 5.178: 98.6847% ( 5) 00:13:46.478 5.178 - 5.207: 98.7151% ( 3) 00:13:46.478 5.207 - 5.236: 98.7252% ( 1) 00:13:46.478 5.236 - 5.265: 98.7353% ( 1) 00:13:46.478 5.265 - 5.295: 98.7454% ( 1) 00:13:46.478 5.324 - 5.353: 98.7556% ( 1) 00:13:46.478 5.411 - 5.440: 98.7657% ( 1) 00:13:46.478 5.585 - 5.615: 98.7758% ( 1) 00:13:46.478 5.673 - 5.702: 98.7859% ( 1) 00:13:46.478 5.702 - 5.731: 98.7960% ( 1) 00:13:46.478 7.215 - 7.244: 98.8062% ( 1) 00:13:46.478 8.611 - 8.669: 98.8163% ( 1) 00:13:46.478 8.669 - 8.727: 98.8264% ( 1) 00:13:46.478 8.727 - 8.785: 98.8567% ( 3) 00:13:46.478 8.844 - 8.902: 98.8669% ( 1) 00:13:46.478 8.902 - 8.960: 98.8972% ( 3) 00:13:46.478 8.960 - 9.018: 98.9073% ( 1) 00:13:46.478 9.018 - 9.076: 98.9276% ( 2) 00:13:46.478 9.076 - 9.135: 98.9478% ( 2) 00:13:46.478 9.135 - 9.193: 98.9579% ( 1) 00:13:46.478 9.309 - 9.367: 98.9680% ( 1) 00:13:46.478 9.367 - 9.425: 98.9781% ( 1) 00:13:46.478 9.425 - 9.484: 98.9883% ( 1) 00:13:46.478 9.484 - 9.542: 98.9984% ( 1) 00:13:46.478 9.716 - 9.775: 99.0186% ( 2) 00:13:46.478 
9.891 - 9.949: 99.0287% ( 1) 00:13:46.478 9.949 - 10.007: 99.0490% ( 2) 00:13:46.478 10.007 - 10.065: 99.0591% ( 1) 00:13:46.478 10.065 - 10.124: 99.0692% ( 1) 00:13:46.478 10.124 - 10.182: 99.0894% ( 2) 00:13:46.478 10.356 - 10.415: 99.0996% ( 1) 00:13:46.478 10.415 - 10.473: 99.1097% ( 1) 00:13:46.478 10.647 - 10.705: 99.1198% ( 1) 00:13:46.478 10.822 - 10.880: 99.1299% ( 1) 00:13:46.478 11.404 - 11.462: 99.1400% ( 1) 00:13:46.478 11.578 - 11.636: 99.1501% ( 1) 00:13:46.478 13.440 - 13.498: 99.1603% ( 1) 00:13:46.478 13.498 - 13.556: 99.1704% ( 1) 00:13:46.478 14.138 - 14.196: 99.1805% ( 1) 00:13:46.478 14.196 - 14.255: 99.1906% ( 1) 00:13:46.478 14.895 - 15.011: 99.2007% ( 1) 00:13:46.478 15.244 - 15.360: 99.2108% ( 1) 00:13:46.478 15.476 - 15.593: 99.2210% ( 1) 00:13:46.478 15.709 - 15.825: 99.2311% ( 1) 00:13:46.478 15.825 - 15.942: 99.2412% ( 1) 00:13:46.478 15.942 - 16.058: 99.2513% ( 1) 00:13:46.478 17.920 - 18.036: 99.2614% ( 1) 00:13:46.478 18.036 - 18.153: 99.2715% ( 1) 00:13:46.478 18.385 - 18.502: 99.3120% ( 4) 00:13:46.478 18.502 - 18.618: 99.3525% ( 4) 00:13:46.478 18.618 - 18.735: 99.4638% ( 11) 00:13:46.478 18.735 - 18.851: 99.5042% ( 4) 00:13:46.478 18.851 - 18.967: 99.5346% ( 3) 00:13:46.478 18.967 - 19.084: 99.5751% ( 4) 00:13:46.478 19.084 - 19.200: 99.5852% ( 1) 00:13:46.478 19.200 - 19.316: 99.6155% ( 3) 00:13:46.478 19.316 - 19.433: 99.6459% ( 3) 00:13:46.478 19.549 - 19.665: 99.6864% ( 4) 00:13:46.478 19.665 - 19.782: 99.7471% ( 6) 00:13:46.478 19.782 - 19.898: 99.7572% ( 1) 00:13:46.478 19.898 - 20.015: 99.7774% ( 2) 00:13:46.478 20.131 - 20.247: 99.8179% ( 4) 00:13:46.478 20.247 - 20.364: 99.8482% ( 3) 00:13:46.478 20.364 - 20.480: 99.8685% ( 2) 00:13:46.478 20.480 - 20.596: 99.8887% ( 2) 00:13:46.478 20.596 - 20.713: 99.8988% ( 1) 00:13:46.478 20.829 - 20.945: 99.9089% ( 1) 00:13:46.478 28.975 - 29.091: 99.9191% ( 1) 00:13:46.478 35.607 - 35.840: 99.9292% ( 1) 00:13:46.478 48.407 - 48.640: 99.9393% ( 1) 00:13:46.478 3053.382 - 3068.276: 99.9494% ( 1) 00:13:46.478 3961.949 - 3991.738: 99.9595% ( 1) 00:13:46.478 3991.738 - 4021.527: 100.0000% ( 4) 00:13:46.478 00:13:46.478 Complete histogram 00:13:46.478 ================== 00:13:46.478 Range in us Cumulative Count 00:13:46.478 2.051 - 2.065: 0.0101% ( 1) 00:13:46.478 2.065 - 2.080: 0.7588% ( 74) 00:13:46.478 2.080 - 2.095: 5.5038% ( 469) 00:13:46.478 2.095 - 2.109: 9.5103% ( 396) 00:13:46.478 2.109 - 2.124: 12.1813% ( 264) 00:13:46.478 2.124 - 2.138: 16.6329% ( 440) 00:13:46.478 2.138 - 2.153: 29.5528% ( 1277) 00:13:46.478 2.153 - 2.167: 52.8329% ( 2301) 00:13:46.478 2.167 - 2.182: 61.5641% ( 863) 00:13:46.478 2.182 - 2.196: 64.8725% ( 327) 00:13:46.479 2.196 - 2.211: 69.0510% ( 413) 00:13:46.479 2.211 - 2.225: 76.7200% ( 758) 00:13:46.479 2.225 - 2.240: 82.5880% ( 580) 00:13:46.479 2.240 - 2.255: 85.1072% ( 249) 00:13:46.479 2.255 - 2.269: 85.9065% ( 79) 00:13:46.479 2.269 - 2.284: 87.8794% ( 195) 00:13:46.479 2.284 - 2.298: 90.1356% ( 223) 00:13:46.479 2.298 - 2.313: 92.0174% ( 186) 00:13:46.479 2.313 - 2.327: 92.5840% ( 56) 00:13:46.479 2.327 - 2.342: 92.7661% ( 18) 00:13:46.479 2.342 - 2.356: 93.9093% ( 113) 00:13:46.479 2.356 - 2.371: 95.4573% ( 153) 00:13:46.479 2.371 - 2.385: 96.4690% ( 100) 00:13:46.479 2.385 - 2.400: 96.8839% ( 41) 00:13:46.479 2.400 - 2.415: 97.0558% ( 17) 00:13:46.479 2.415 - 2.429: 97.1671% ( 11) 00:13:46.479 2.429 - 2.444: 97.3796% ( 21) 00:13:46.479 2.444 - 2.458: 97.5314% ( 15) 00:13:46.479 2.458 - 2.473: 97.5820% ( 5) 00:13:46.479 2.473 - 2.487: 97.6730% ( 9) 00:13:46.479 2.487 - 
2.502: 97.7236% ( 5) 00:13:46.479 2.502 - 2.516: 97.7438% ( 2) 00:13:46.479 2.516 - 2.531: 97.7742% ( 3) 00:13:46.479 2.545 - 2.560: 97.7944% ( 2) 00:13:46.479 2.560 - 2.575: 97.8248% ( 3) 00:13:46.479 2.575 - 2.589: 97.8349% ( 1) 00:13:46.479 2.589 - 2.604: 97.8551% ( 2) 00:13:46.479 2.618 - 2.633: 97.8652% ( 1) 00:13:46.479 2.676 - 2.691: 97.8754% ( 1) 00:13:46.479 2.778 - 2.793: 97.8855% ( 1) 00:13:46.479 2.880 - 2.895: 97.8956% ( 1) 00:13:46.479 3.113 - 3.127: 97.9057% ( 1) 00:13:46.479 3.287 - 3.302: 97.9158% ( 1) 00:13:46.479 3.302 - 3.316: 97.9361% ( 2) 00:13:46.479 3.316 - 3.331: 97.9462% ( 1) 00:13:46.479 3.331 - 3.345: 97.9563% ( 1) 00:13:46.479 3.360 - 3.375: 97.9968% ( 4) 00:13:46.479 3.389 - 3.404: 98.0069% ( 1) 00:13:46.479 3.433 - 3.447: 98.0372% ( 3) 00:13:46.479 3.476 - 3.491: 98.0473% ( 1) 00:13:46.479 3.491 - 3.505: 98.0676% ( 2) 00:13:46.479 3.520 - 3.535: 98.0878% ( 2) 00:13:46.479 3.535 - 3.549: 98.1081% ( 2) 00:13:46.479 3.564 - 3.578: 98.1182% ( 1) 00:13:46.479 3.578 - 3.593: 98.1283% ( 1) 00:13:46.479 3.593 - 3.607: 98.1586% ( 3) 00:13:46.479 3.607 - 3.622: 98.1789% ( 2) 00:13:46.479 3.724 - 3.753: 98.1890% ( 1) 00:13:46.479 3.782 - 3.811: 98.1991% ( 1) 00:13:46.479 3.811 - 3.840: 98.2092% ( 1) 00:13:46.479 3.985 - 4.015: 98.2193% ( 1) 00:13:46.479 4.044 - 4.073: 98.2295% ( 1) 00:13:46.479 4.335 - 4.364: 98.2396% ( 1) 00:13:46.479 4.451 - 4.480: 98.2497% ( 1) 00:13:46.479 4.596 - 4.625: 98.2598% ( 1) 00:13:46.479 4.655 - 4.684: 98.2699% ( 1) 00:13:46.479 6.865 - 6.895: 98.2800% ( 1) 00:13:46.479 7.011 - 7.040: 98.2902% ( 1) 00:13:46.479 7.127 - 7.156: 98.3003% ( 1) 00:13:46.479 7.156 - 7.185: 98.3104% ( 1) 00:13:46.479 7.185 - 7.215: 98.3205% ( 1) 00:13:46.479 7.244 - 7.273: 98.3306% ( 1) 00:13:46.479 7.273 - 7.302: 98.3509% ( 2) 00:13:46.479 7.505 - 7.564: 98.3812% ( 3) 00:13:46.479 7.564 - 7.622: 98.3913% ( 1) 00:13:46.479 7.971 - 8.029: 98.4015% ( 1) 00:13:46.479 8.029 - 8.087: 98.4116% ( 1) 00:13:46.479 8.087 - 8.145: 98.4217% ( 1) 00:13:46.479 8.320 - 8.378: 98.4318% ( 1) 00:13:46.479 8.669 - 8.727: 98.4520% ( 2) 00:13:46.479 8.727 - 8.785: 98.4622% ( 1) 00:13:46.479 9.018 - 9.076: 98.4723% ( 1) 00:13:46.479 9.367 - 9.425: 98.4824% ( 1) 00:13:46.479 9.775 - 9.833: 98.4925% ( 1) 00:13:46.479 12.567 - 12.625: 98.5026% ( 1) 00:13:46.479 14.196 - 14.255: 98.5127% ( 1) 00:13:46.479 16.524 - 16.640: 98.5330% ( 2) 00:13:46.479 16.640 - 16.756: 98.5431% ( 1) 00:13:46.479 16.756 - 16.873: 98.5937% ( 5) 00:13:46.479 16.873 - 16.989: 98.6746% ( 8) 00:13:46.479 16.989 - 17.105: 98.7454% ( 7) 00:13:46.479 17.105 - 17.222: 98.7960% ( 5) 00:13:46.479 17.222 - 17.338: 98.8466% ( 5) 00:13:46.479 17.338 - 17.455: 98.8567% ( 1) 00:13:46.479 17.687 - 17.804: 98.8669% ( 1) 00:13:46.479 17.804 - 17.920: 98.8972% ( 3) 00:13:46.479 17.920 - 18.036: 98.9377% ( 4) 00:13:46.479 18.036 - 18.153: 98.9984% ( 6) 00:13:46.479 18.153 - 18.269: 99.0389% ( 4) 00:13:46.479 18.269 - 18.385: 99.0591% ( 2) 00:13:46.479 18.385 - 18.502: 99.0996% ( 4) 00:13:46.479 18.618 - 18.735: 99.1198% ( 2) 00:13:46.479 18.735 - 18.851: 99.1501% ( 3) 00:13:46.479 18.851 - 18.967: 99.1704% ( 2) 00:13:46.479 28.044 - 28.160: 99.1805% ( 1) 00:13:46.479 3023.593 - 3038.487: 99.2007% ( 2) 00:13:46.479 3902.371 - 3932.160: 99.2210% ( 2) 00:13:46.479 3932.160 - 3961.949: 99.2311% ( 1) 00:13:46.479 3961.949 - 3991.738: 99.3221% ( 9) 00:13:46.479 3991.738 - 4021.527: 99.7774% ( 45) 00:13:46.479 4021.527 - 4051.316: 99.9292% ( 15) 00:13:46.479 4051.316 - 4081.105: 99.9494% ( 2) 00:13:46.479 4081.105 - 4110.895: 99.9595% ( 
1) 00:13:46.479 7000.436 - 7030.225: 99.9798% ( 2) 00:13:46.479 7030.225 - 7060.015: 99.9899% ( 1) 00:13:46.479 7060.015 - 7089.804: 100.0000% ( 1) 00:13:46.479 00:13:46.479 22:13:43 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:46.479 22:13:43 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:46.479 22:13:43 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:46.479 22:13:43 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:46.479 22:13:43 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:46.738 [ 00:13:46.738 { 00:13:46.738 "allow_any_host": true, 00:13:46.738 "hosts": [], 00:13:46.738 "listen_addresses": [], 00:13:46.738 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:46.738 "subtype": "Discovery" 00:13:46.738 }, 00:13:46.738 { 00:13:46.738 "allow_any_host": true, 00:13:46.738 "hosts": [], 00:13:46.738 "listen_addresses": [ 00:13:46.738 { 00:13:46.738 "adrfam": "IPv4", 00:13:46.738 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:46.738 "transport": "VFIOUSER", 00:13:46.738 "trsvcid": "0", 00:13:46.738 "trtype": "VFIOUSER" 00:13:46.738 } 00:13:46.738 ], 00:13:46.738 "max_cntlid": 65519, 00:13:46.738 "max_namespaces": 32, 00:13:46.738 "min_cntlid": 1, 00:13:46.738 "model_number": "SPDK bdev Controller", 00:13:46.738 "namespaces": [ 00:13:46.738 { 00:13:46.738 "bdev_name": "Malloc1", 00:13:46.738 "name": "Malloc1", 00:13:46.738 "nguid": "235BFBE0E25D47148A35DA37B1DD850D", 00:13:46.738 "nsid": 1, 00:13:46.738 "uuid": "235bfbe0-e25d-4714-8a35-da37b1dd850d" 00:13:46.738 }, 00:13:46.738 { 00:13:46.738 "bdev_name": "Malloc3", 00:13:46.738 "name": "Malloc3", 00:13:46.738 "nguid": "9AC5E237AA5C49E489C662DE8E61891B", 00:13:46.738 "nsid": 2, 00:13:46.738 "uuid": "9ac5e237-aa5c-49e4-89c6-62de8e61891b" 00:13:46.738 } 00:13:46.738 ], 00:13:46.738 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:46.738 "serial_number": "SPDK1", 00:13:46.738 "subtype": "NVMe" 00:13:46.738 }, 00:13:46.738 { 00:13:46.738 "allow_any_host": true, 00:13:46.738 "hosts": [], 00:13:46.738 "listen_addresses": [ 00:13:46.738 { 00:13:46.738 "adrfam": "IPv4", 00:13:46.738 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:46.738 "transport": "VFIOUSER", 00:13:46.738 "trsvcid": "0", 00:13:46.738 "trtype": "VFIOUSER" 00:13:46.738 } 00:13:46.738 ], 00:13:46.738 "max_cntlid": 65519, 00:13:46.738 "max_namespaces": 32, 00:13:46.738 "min_cntlid": 1, 00:13:46.738 "model_number": "SPDK bdev Controller", 00:13:46.738 "namespaces": [ 00:13:46.738 { 00:13:46.738 "bdev_name": "Malloc2", 00:13:46.738 "name": "Malloc2", 00:13:46.739 "nguid": "15CD31AAC3BA455E87DB66D1345DDE67", 00:13:46.739 "nsid": 1, 00:13:46.739 "uuid": "15cd31aa-c3ba-455e-87db-66d1345dde67" 00:13:46.739 } 00:13:46.739 ], 00:13:46.739 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:46.739 "serial_number": "SPDK2", 00:13:46.739 "subtype": "NVMe" 00:13:46.739 } 00:13:46.739 ] 00:13:46.997 22:13:43 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:46.997 22:13:43 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:46.997 22:13:43 -- target/nvmf_vfio_user.sh@34 -- # aerpid=71449 00:13:46.997 22:13:43 -- target/nvmf_vfio_user.sh@37 -- # waitforfile 
/tmp/aer_touch_file 00:13:46.998 22:13:43 -- common/autotest_common.sh@1254 -- # local i=0 00:13:46.998 22:13:43 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:46.998 22:13:43 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:13:46.998 22:13:43 -- common/autotest_common.sh@1257 -- # i=1 00:13:46.998 22:13:43 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:13:46.998 22:13:43 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:46.998 22:13:43 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:13:46.998 22:13:43 -- common/autotest_common.sh@1257 -- # i=2 00:13:46.998 22:13:43 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:13:46.998 22:13:43 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:46.998 22:13:43 -- common/autotest_common.sh@1256 -- # '[' 2 -lt 200 ']' 00:13:46.998 22:13:43 -- common/autotest_common.sh@1257 -- # i=3 00:13:46.998 22:13:43 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:13:47.256 22:13:43 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:47.256 22:13:43 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:47.256 22:13:43 -- common/autotest_common.sh@1265 -- # return 0 00:13:47.256 22:13:43 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:47.256 22:13:43 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:47.516 Malloc4 00:13:47.516 22:13:44 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:47.789 22:13:44 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:47.789 Asynchronous Event Request test 00:13:47.789 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:47.789 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:47.789 Registering asynchronous event callbacks... 00:13:47.789 Starting namespace attribute notice tests for all controllers... 00:13:47.789 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:47.789 aer_cb - Changed Namespace 00:13:47.789 Cleaning up... 
00:13:48.075 [ 00:13:48.075 { 00:13:48.075 "allow_any_host": true, 00:13:48.075 "hosts": [], 00:13:48.075 "listen_addresses": [], 00:13:48.075 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:48.075 "subtype": "Discovery" 00:13:48.075 }, 00:13:48.075 { 00:13:48.075 "allow_any_host": true, 00:13:48.075 "hosts": [], 00:13:48.075 "listen_addresses": [ 00:13:48.075 { 00:13:48.075 "adrfam": "IPv4", 00:13:48.075 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:48.075 "transport": "VFIOUSER", 00:13:48.075 "trsvcid": "0", 00:13:48.075 "trtype": "VFIOUSER" 00:13:48.075 } 00:13:48.075 ], 00:13:48.075 "max_cntlid": 65519, 00:13:48.075 "max_namespaces": 32, 00:13:48.075 "min_cntlid": 1, 00:13:48.075 "model_number": "SPDK bdev Controller", 00:13:48.075 "namespaces": [ 00:13:48.075 { 00:13:48.075 "bdev_name": "Malloc1", 00:13:48.075 "name": "Malloc1", 00:13:48.075 "nguid": "235BFBE0E25D47148A35DA37B1DD850D", 00:13:48.075 "nsid": 1, 00:13:48.075 "uuid": "235bfbe0-e25d-4714-8a35-da37b1dd850d" 00:13:48.075 }, 00:13:48.075 { 00:13:48.075 "bdev_name": "Malloc3", 00:13:48.075 "name": "Malloc3", 00:13:48.075 "nguid": "9AC5E237AA5C49E489C662DE8E61891B", 00:13:48.075 "nsid": 2, 00:13:48.075 "uuid": "9ac5e237-aa5c-49e4-89c6-62de8e61891b" 00:13:48.075 } 00:13:48.075 ], 00:13:48.075 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:48.075 "serial_number": "SPDK1", 00:13:48.075 "subtype": "NVMe" 00:13:48.075 }, 00:13:48.075 { 00:13:48.075 "allow_any_host": true, 00:13:48.075 "hosts": [], 00:13:48.075 "listen_addresses": [ 00:13:48.075 { 00:13:48.075 "adrfam": "IPv4", 00:13:48.075 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:48.075 "transport": "VFIOUSER", 00:13:48.075 "trsvcid": "0", 00:13:48.075 "trtype": "VFIOUSER" 00:13:48.075 } 00:13:48.075 ], 00:13:48.075 "max_cntlid": 65519, 00:13:48.075 "max_namespaces": 32, 00:13:48.075 "min_cntlid": 1, 00:13:48.075 "model_number": "SPDK bdev Controller", 00:13:48.075 "namespaces": [ 00:13:48.075 { 00:13:48.075 "bdev_name": "Malloc2", 00:13:48.075 "name": "Malloc2", 00:13:48.075 "nguid": "15CD31AAC3BA455E87DB66D1345DDE67", 00:13:48.075 "nsid": 1, 00:13:48.075 "uuid": "15cd31aa-c3ba-455e-87db-66d1345dde67" 00:13:48.075 }, 00:13:48.075 { 00:13:48.075 "bdev_name": "Malloc4", 00:13:48.075 "name": "Malloc4", 00:13:48.075 "nguid": "9D7515D212E4415ABDC604E612A94C54", 00:13:48.075 "nsid": 2, 00:13:48.075 "uuid": "9d7515d2-12e4-415a-bdc6-04e612a94c54" 00:13:48.075 } 00:13:48.075 ], 00:13:48.075 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:48.075 "serial_number": "SPDK2", 00:13:48.075 "subtype": "NVMe" 00:13:48.075 } 00:13:48.075 ] 00:13:48.075 22:13:44 -- target/nvmf_vfio_user.sh@44 -- # wait 71449 00:13:48.075 22:13:44 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:48.075 22:13:44 -- target/nvmf_vfio_user.sh@95 -- # killprocess 70770 00:13:48.075 22:13:44 -- common/autotest_common.sh@936 -- # '[' -z 70770 ']' 00:13:48.075 22:13:44 -- common/autotest_common.sh@940 -- # kill -0 70770 00:13:48.075 22:13:44 -- common/autotest_common.sh@941 -- # uname 00:13:48.075 22:13:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:48.075 22:13:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70770 00:13:48.075 killing process with pid 70770 00:13:48.075 22:13:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:48.075 22:13:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:48.075 22:13:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70770' 00:13:48.075 22:13:44 -- 
common/autotest_common.sh@955 -- # kill 70770 00:13:48.075 [2024-11-17 22:13:44.644977] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:48.075 22:13:44 -- common/autotest_common.sh@960 -- # wait 70770 00:13:48.654 22:13:45 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:48.654 22:13:45 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:48.654 22:13:45 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:48.654 22:13:45 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:48.654 22:13:45 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:48.654 Process pid: 71498 00:13:48.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.654 22:13:45 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=71498 00:13:48.654 22:13:45 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 71498' 00:13:48.654 22:13:45 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:48.654 22:13:45 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 71498 00:13:48.654 22:13:45 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:48.654 22:13:45 -- common/autotest_common.sh@829 -- # '[' -z 71498 ']' 00:13:48.654 22:13:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.654 22:13:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.654 22:13:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.654 22:13:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.654 22:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:48.655 [2024-11-17 22:13:45.224114] thread.c:2929:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:48.655 [2024-11-17 22:13:45.225202] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:48.655 [2024-11-17 22:13:45.225278] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.913 [2024-11-17 22:13:45.360238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.913 [2024-11-17 22:13:45.495394] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:48.913 [2024-11-17 22:13:45.495546] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.913 [2024-11-17 22:13:45.495558] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.913 [2024-11-17 22:13:45.495566] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:48.913 [2024-11-17 22:13:45.495728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.913 [2024-11-17 22:13:45.495865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.913 [2024-11-17 22:13:45.495984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.913 [2024-11-17 22:13:45.495990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.173 [2024-11-17 22:13:45.618223] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:13:49.173 [2024-11-17 22:13:45.624959] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:13:49.173 [2024-11-17 22:13:45.625140] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:13:49.173 [2024-11-17 22:13:45.626273] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:49.173 [2024-11-17 22:13:45.626413] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:13:49.740 22:13:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.740 22:13:46 -- common/autotest_common.sh@862 -- # return 0 00:13:49.740 22:13:46 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:50.676 22:13:47 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:50.935 22:13:47 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:50.935 22:13:47 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:50.935 22:13:47 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:50.935 22:13:47 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:50.935 22:13:47 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:51.503 Malloc1 00:13:51.503 22:13:47 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:51.503 22:13:48 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:52.071 22:13:48 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:52.071 22:13:48 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:52.071 22:13:48 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:52.071 22:13:48 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:52.329 Malloc2 00:13:52.588 22:13:48 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:52.846 22:13:49 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:53.105 22:13:49 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:53.364 
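Condensed, the RPC sequence traced above is repeated once per device; a minimal sketch of the steps for the second controller, with the paths exactly as used in this run:
  # enable the VFIOUSER transport in interrupt mode (flags as passed at nvmf_vfio_user.sh@64 above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  # back the namespace with a 64 MB malloc bdev using 512-byte blocks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  # create the subsystem, attach the namespace, and listen on the per-controller vfio-user directory
  mkdir -p /var/run/vfio-user/domain/vfio-user2/2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0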
22:13:49 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:53.364 22:13:49 -- target/nvmf_vfio_user.sh@95 -- # killprocess 71498 00:13:53.364 22:13:49 -- common/autotest_common.sh@936 -- # '[' -z 71498 ']' 00:13:53.364 22:13:49 -- common/autotest_common.sh@940 -- # kill -0 71498 00:13:53.364 22:13:49 -- common/autotest_common.sh@941 -- # uname 00:13:53.364 22:13:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:53.364 22:13:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71498 00:13:53.364 killing process with pid 71498 00:13:53.364 22:13:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:53.364 22:13:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:53.364 22:13:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71498' 00:13:53.364 22:13:49 -- common/autotest_common.sh@955 -- # kill 71498 00:13:53.364 22:13:49 -- common/autotest_common.sh@960 -- # wait 71498 00:13:53.934 22:13:50 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:53.934 22:13:50 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:53.934 00:13:53.934 real 0m56.096s 00:13:53.934 user 3m40.503s 00:13:53.934 sys 0m3.705s 00:13:53.934 ************************************ 00:13:53.934 END TEST nvmf_vfio_user 00:13:53.934 ************************************ 00:13:53.934 22:13:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:53.934 22:13:50 -- common/autotest_common.sh@10 -- # set +x 00:13:53.934 22:13:50 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:53.934 22:13:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:53.934 22:13:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.934 22:13:50 -- common/autotest_common.sh@10 -- # set +x 00:13:53.934 ************************************ 00:13:53.934 START TEST nvmf_vfio_user_nvme_compliance 00:13:53.934 ************************************ 00:13:53.934 22:13:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:53.934 * Looking for test storage... 
00:13:53.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:13:53.934 22:13:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:53.934 22:13:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:53.934 22:13:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:53.934 22:13:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:53.934 22:13:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:53.934 22:13:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:53.934 22:13:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:53.934 22:13:50 -- scripts/common.sh@335 -- # IFS=.-: 00:13:53.934 22:13:50 -- scripts/common.sh@335 -- # read -ra ver1 00:13:53.934 22:13:50 -- scripts/common.sh@336 -- # IFS=.-: 00:13:53.934 22:13:50 -- scripts/common.sh@336 -- # read -ra ver2 00:13:53.934 22:13:50 -- scripts/common.sh@337 -- # local 'op=<' 00:13:53.934 22:13:50 -- scripts/common.sh@339 -- # ver1_l=2 00:13:53.934 22:13:50 -- scripts/common.sh@340 -- # ver2_l=1 00:13:53.934 22:13:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:53.934 22:13:50 -- scripts/common.sh@343 -- # case "$op" in 00:13:53.934 22:13:50 -- scripts/common.sh@344 -- # : 1 00:13:53.934 22:13:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:53.934 22:13:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:53.934 22:13:50 -- scripts/common.sh@364 -- # decimal 1 00:13:53.934 22:13:50 -- scripts/common.sh@352 -- # local d=1 00:13:53.934 22:13:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:53.934 22:13:50 -- scripts/common.sh@354 -- # echo 1 00:13:53.934 22:13:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:53.934 22:13:50 -- scripts/common.sh@365 -- # decimal 2 00:13:53.934 22:13:50 -- scripts/common.sh@352 -- # local d=2 00:13:53.934 22:13:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:53.934 22:13:50 -- scripts/common.sh@354 -- # echo 2 00:13:53.934 22:13:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:53.934 22:13:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:53.934 22:13:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:53.934 22:13:50 -- scripts/common.sh@367 -- # return 0 00:13:53.934 22:13:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:53.934 22:13:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:53.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.934 --rc genhtml_branch_coverage=1 00:13:53.934 --rc genhtml_function_coverage=1 00:13:53.934 --rc genhtml_legend=1 00:13:53.934 --rc geninfo_all_blocks=1 00:13:53.934 --rc geninfo_unexecuted_blocks=1 00:13:53.934 00:13:53.934 ' 00:13:53.934 22:13:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:53.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.934 --rc genhtml_branch_coverage=1 00:13:53.934 --rc genhtml_function_coverage=1 00:13:53.934 --rc genhtml_legend=1 00:13:53.934 --rc geninfo_all_blocks=1 00:13:53.934 --rc geninfo_unexecuted_blocks=1 00:13:53.934 00:13:53.934 ' 00:13:53.934 22:13:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:53.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.934 --rc genhtml_branch_coverage=1 00:13:53.934 --rc genhtml_function_coverage=1 00:13:53.934 --rc genhtml_legend=1 00:13:53.934 --rc geninfo_all_blocks=1 00:13:53.934 --rc geninfo_unexecuted_blocks=1 00:13:53.934 00:13:53.934 ' 00:13:53.934 
22:13:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:53.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.934 --rc genhtml_branch_coverage=1 00:13:53.934 --rc genhtml_function_coverage=1 00:13:53.934 --rc genhtml_legend=1 00:13:53.934 --rc geninfo_all_blocks=1 00:13:53.934 --rc geninfo_unexecuted_blocks=1 00:13:53.934 00:13:53.934 ' 00:13:53.934 22:13:50 -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:53.934 22:13:50 -- nvmf/common.sh@7 -- # uname -s 00:13:53.934 22:13:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.934 22:13:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.934 22:13:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.934 22:13:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.934 22:13:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.934 22:13:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.934 22:13:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.934 22:13:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.934 22:13:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.934 22:13:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.934 22:13:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:13:53.934 22:13:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:13:53.934 22:13:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.934 22:13:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.934 22:13:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:53.934 22:13:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:53.934 22:13:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.934 22:13:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.934 22:13:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.934 22:13:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.934 22:13:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.934 22:13:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.934 22:13:50 -- paths/export.sh@5 -- # export PATH 00:13:53.935 22:13:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.935 22:13:50 -- nvmf/common.sh@46 -- # : 0 00:13:53.935 22:13:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:53.935 22:13:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:53.935 22:13:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:53.935 22:13:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.935 22:13:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.935 22:13:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:53.935 22:13:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:53.935 22:13:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:53.935 22:13:50 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:53.935 22:13:50 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:53.935 22:13:50 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:53.935 22:13:50 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:53.935 22:13:50 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:53.935 22:13:50 -- compliance/compliance.sh@20 -- # nvmfpid=71703 00:13:53.935 22:13:50 -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:53.935 Process pid: 71703 00:13:53.935 22:13:50 -- compliance/compliance.sh@21 -- # echo 'Process pid: 71703' 00:13:53.935 22:13:50 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:53.935 22:13:50 -- compliance/compliance.sh@24 -- # waitforlisten 71703 00:13:53.935 22:13:50 -- common/autotest_common.sh@829 -- # '[' -z 71703 ']' 00:13:53.935 22:13:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.935 22:13:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.935 22:13:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.935 22:13:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.935 22:13:50 -- common/autotest_common.sh@10 -- # set +x 00:13:54.194 [2024-11-17 22:13:50.568825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:54.194 [2024-11-17 22:13:50.568926] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.194 [2024-11-17 22:13:50.702605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:54.454 [2024-11-17 22:13:50.850833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:54.454 [2024-11-17 22:13:50.850999] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.454 [2024-11-17 22:13:50.851012] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.454 [2024-11-17 22:13:50.851020] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.454 [2024-11-17 22:13:50.851208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.454 [2024-11-17 22:13:50.851361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.454 [2024-11-17 22:13:50.851369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.022 22:13:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:55.022 22:13:51 -- common/autotest_common.sh@862 -- # return 0 00:13:55.022 22:13:51 -- compliance/compliance.sh@26 -- # sleep 1 00:13:55.958 22:13:52 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:55.958 22:13:52 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:55.958 22:13:52 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:55.958 22:13:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.958 22:13:52 -- common/autotest_common.sh@10 -- # set +x 00:13:56.217 22:13:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.217 22:13:52 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:56.217 22:13:52 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:56.217 22:13:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.217 22:13:52 -- common/autotest_common.sh@10 -- # set +x 00:13:56.217 malloc0 00:13:56.218 22:13:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.218 22:13:52 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:56.218 22:13:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.218 22:13:52 -- common/autotest_common.sh@10 -- # set +x 00:13:56.218 22:13:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.218 22:13:52 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:56.218 22:13:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.218 22:13:52 -- common/autotest_common.sh@10 -- # set +x 00:13:56.218 22:13:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.218 22:13:52 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:56.218 22:13:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.218 22:13:52 -- common/autotest_common.sh@10 -- # set +x 00:13:56.218 22:13:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.218 22:13:52 -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 
'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:56.477 00:13:56.477 00:13:56.477 CUnit - A unit testing framework for C - Version 2.1-3 00:13:56.477 http://cunit.sourceforge.net/ 00:13:56.477 00:13:56.477 00:13:56.477 Suite: nvme_compliance 00:13:56.477 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-17 22:13:52.899856] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:56.477 [2024-11-17 22:13:52.899917] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:56.477 [2024-11-17 22:13:52.899928] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:56.477 passed 00:13:56.477 Test: admin_identify_ctrlr_verify_fused ...passed 00:13:56.736 Test: admin_identify_ns ...[2024-11-17 22:13:53.140772] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:56.736 [2024-11-17 22:13:53.148760] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:56.736 passed 00:13:56.736 Test: admin_get_features_mandatory_features ...passed 00:13:56.995 Test: admin_get_features_optional_features ...passed 00:13:56.995 Test: admin_set_features_number_of_queues ...passed 00:13:57.254 Test: admin_get_log_page_mandatory_logs ...passed 00:13:57.254 Test: admin_get_log_page_with_lpo ...[2024-11-17 22:13:53.788766] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:57.254 passed 00:13:57.513 Test: fabric_property_get ...passed 00:13:57.513 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-17 22:13:53.977611] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:57.513 passed 00:13:57.773 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-17 22:13:54.146831] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:57.773 [2024-11-17 22:13:54.162822] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:57.773 passed 00:13:57.773 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-17 22:13:54.255028] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:57.773 passed 00:13:58.032 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-17 22:13:54.414834] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:58.032 [2024-11-17 22:13:54.436825] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:58.032 passed 00:13:58.032 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-17 22:13:54.527024] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:58.032 [2024-11-17 22:13:54.527130] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:58.032 passed 00:13:58.290 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-17 22:13:54.708749] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:58.290 [2024-11-17 22:13:54.716750] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:58.290 [2024-11-17 22:13:54.724748] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:58.290 [2024-11-17 22:13:54.732755] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:58.290 passed 
00:13:58.290 Test: admin_create_io_sq_verify_pc ...[2024-11-17 22:13:54.861775] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:58.549 passed 00:13:59.487 Test: admin_create_io_qp_max_qps ...[2024-11-17 22:13:56.074787] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:00.055 passed 00:14:00.315 Test: admin_create_io_sq_shared_cq ...[2024-11-17 22:13:56.674830] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:00.315 passed 00:14:00.315 00:14:00.315 Run Summary: Type Total Ran Passed Failed Inactive 00:14:00.315 suites 1 1 n/a 0 0 00:14:00.315 tests 18 18 18 0 0 00:14:00.315 asserts 360 360 360 0 n/a 00:14:00.315 00:14:00.315 Elapsed time = 1.580 seconds 00:14:00.315 22:13:56 -- compliance/compliance.sh@42 -- # killprocess 71703 00:14:00.315 22:13:56 -- common/autotest_common.sh@936 -- # '[' -z 71703 ']' 00:14:00.315 22:13:56 -- common/autotest_common.sh@940 -- # kill -0 71703 00:14:00.315 22:13:56 -- common/autotest_common.sh@941 -- # uname 00:14:00.315 22:13:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:00.315 22:13:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71703 00:14:00.315 22:13:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:00.315 22:13:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:00.315 killing process with pid 71703 00:14:00.315 22:13:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71703' 00:14:00.315 22:13:56 -- common/autotest_common.sh@955 -- # kill 71703 00:14:00.315 22:13:56 -- common/autotest_common.sh@960 -- # wait 71703 00:14:00.884 22:13:57 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:00.884 22:13:57 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:00.884 00:14:00.884 real 0m6.895s 00:14:00.884 user 0m18.993s 00:14:00.884 sys 0m0.622s 00:14:00.884 22:13:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:00.884 22:13:57 -- common/autotest_common.sh@10 -- # set +x 00:14:00.884 ************************************ 00:14:00.884 END TEST nvmf_vfio_user_nvme_compliance 00:14:00.884 ************************************ 00:14:00.884 22:13:57 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:00.884 22:13:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:00.884 22:13:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:00.884 22:13:57 -- common/autotest_common.sh@10 -- # set +x 00:14:00.884 ************************************ 00:14:00.884 START TEST nvmf_vfio_user_fuzz 00:14:00.884 ************************************ 00:14:00.884 22:13:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:00.884 * Looking for test storage... 
00:14:00.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:00.884 22:13:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:00.884 22:13:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:00.884 22:13:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:00.884 22:13:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:00.884 22:13:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:00.884 22:13:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:00.884 22:13:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:00.884 22:13:57 -- scripts/common.sh@335 -- # IFS=.-: 00:14:00.884 22:13:57 -- scripts/common.sh@335 -- # read -ra ver1 00:14:00.884 22:13:57 -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.884 22:13:57 -- scripts/common.sh@336 -- # read -ra ver2 00:14:00.884 22:13:57 -- scripts/common.sh@337 -- # local 'op=<' 00:14:00.884 22:13:57 -- scripts/common.sh@339 -- # ver1_l=2 00:14:00.884 22:13:57 -- scripts/common.sh@340 -- # ver2_l=1 00:14:00.884 22:13:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:00.884 22:13:57 -- scripts/common.sh@343 -- # case "$op" in 00:14:00.884 22:13:57 -- scripts/common.sh@344 -- # : 1 00:14:00.884 22:13:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:00.884 22:13:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:00.884 22:13:57 -- scripts/common.sh@364 -- # decimal 1 00:14:00.884 22:13:57 -- scripts/common.sh@352 -- # local d=1 00:14:00.884 22:13:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.884 22:13:57 -- scripts/common.sh@354 -- # echo 1 00:14:00.884 22:13:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:00.884 22:13:57 -- scripts/common.sh@365 -- # decimal 2 00:14:00.884 22:13:57 -- scripts/common.sh@352 -- # local d=2 00:14:00.884 22:13:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.884 22:13:57 -- scripts/common.sh@354 -- # echo 2 00:14:00.884 22:13:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:00.884 22:13:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:00.884 22:13:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:00.884 22:13:57 -- scripts/common.sh@367 -- # return 0 00:14:00.884 22:13:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.884 22:13:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:00.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.884 --rc genhtml_branch_coverage=1 00:14:00.884 --rc genhtml_function_coverage=1 00:14:00.884 --rc genhtml_legend=1 00:14:00.884 --rc geninfo_all_blocks=1 00:14:00.884 --rc geninfo_unexecuted_blocks=1 00:14:00.884 00:14:00.884 ' 00:14:00.884 22:13:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:00.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.885 --rc genhtml_branch_coverage=1 00:14:00.885 --rc genhtml_function_coverage=1 00:14:00.885 --rc genhtml_legend=1 00:14:00.885 --rc geninfo_all_blocks=1 00:14:00.885 --rc geninfo_unexecuted_blocks=1 00:14:00.885 00:14:00.885 ' 00:14:00.885 22:13:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:00.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.885 --rc genhtml_branch_coverage=1 00:14:00.885 --rc genhtml_function_coverage=1 00:14:00.885 --rc genhtml_legend=1 00:14:00.885 --rc geninfo_all_blocks=1 00:14:00.885 --rc geninfo_unexecuted_blocks=1 00:14:00.885 00:14:00.885 ' 00:14:00.885 
22:13:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:00.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.885 --rc genhtml_branch_coverage=1 00:14:00.885 --rc genhtml_function_coverage=1 00:14:00.885 --rc genhtml_legend=1 00:14:00.885 --rc geninfo_all_blocks=1 00:14:00.885 --rc geninfo_unexecuted_blocks=1 00:14:00.885 00:14:00.885 ' 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:00.885 22:13:57 -- nvmf/common.sh@7 -- # uname -s 00:14:00.885 22:13:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.885 22:13:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.885 22:13:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.885 22:13:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.885 22:13:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.885 22:13:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.885 22:13:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.885 22:13:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.885 22:13:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.885 22:13:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.885 22:13:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:14:00.885 22:13:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:14:00.885 22:13:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.885 22:13:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.885 22:13:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.885 22:13:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.885 22:13:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.885 22:13:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.885 22:13:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.885 22:13:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.885 22:13:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.885 22:13:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.885 22:13:57 -- paths/export.sh@5 -- # export PATH 00:14:00.885 22:13:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.885 22:13:57 -- nvmf/common.sh@46 -- # : 0 00:14:00.885 22:13:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:00.885 22:13:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:00.885 22:13:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:00.885 22:13:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.885 22:13:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.885 22:13:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:00.885 22:13:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:00.885 22:13:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=71861 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 71861' 00:14:00.885 Process pid: 71861 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:00.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.885 22:13:57 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 71861 00:14:00.885 22:13:57 -- common/autotest_common.sh@829 -- # '[' -z 71861 ']' 00:14:00.885 22:13:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.885 22:13:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.885 22:13:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:00.885 22:13:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.885 22:13:57 -- common/autotest_common.sh@10 -- # set +x 00:14:02.323 22:13:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.323 22:13:58 -- common/autotest_common.sh@862 -- # return 0 00:14:02.323 22:13:58 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:03.260 22:13:59 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:03.260 22:13:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.260 22:13:59 -- common/autotest_common.sh@10 -- # set +x 00:14:03.260 22:13:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.260 22:13:59 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:03.260 22:13:59 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:03.260 22:13:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.260 22:13:59 -- common/autotest_common.sh@10 -- # set +x 00:14:03.260 malloc0 00:14:03.260 22:13:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.260 22:13:59 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:03.260 22:13:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.260 22:13:59 -- common/autotest_common.sh@10 -- # set +x 00:14:03.260 22:13:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.260 22:13:59 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:03.260 22:13:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.260 22:13:59 -- common/autotest_common.sh@10 -- # set +x 00:14:03.260 22:13:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.260 22:13:59 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:03.260 22:13:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.260 22:13:59 -- common/autotest_common.sh@10 -- # set +x 00:14:03.260 22:13:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.260 22:13:59 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:03.260 22:13:59 -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:03.519 Shutting down the fuzz application 00:14:03.519 22:14:00 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:03.519 22:14:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.519 22:14:00 -- common/autotest_common.sh@10 -- # set +x 00:14:03.519 22:14:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.519 22:14:00 -- target/vfio_user_fuzz.sh@46 -- # killprocess 71861 00:14:03.519 22:14:00 -- common/autotest_common.sh@936 -- # '[' -z 71861 ']' 00:14:03.519 22:14:00 -- common/autotest_common.sh@940 -- # kill -0 71861 00:14:03.520 22:14:00 -- common/autotest_common.sh@941 -- # uname 00:14:03.520 22:14:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:03.520 22:14:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71861 00:14:03.520 22:14:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:03.520 22:14:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 
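For reference, the VFIOUSER fuzz target bring-up traced above condenses to roughly the sequence below. This is a sketch reconstructed from the rpc_cmd calls in this log, not a verbatim copy of vfio_user_fuzz.sh; it assumes an SPDK checkout at SPDK_DIR with nvmf_tgt already running on the default /var/tmp/spdk.sock, so that rpc_cmd maps onto scripts/rpc.py.

# Sketch only -- condensed from the trace above; run against an already-started nvmf_tgt.
SPDK_DIR=/home/vagrant/spdk_repo/spdk              # path as used in this run
NQN=nqn.2021-09.io.spdk:cnode0
TRADDR=/var/run/vfio-user
mkdir -p "$TRADDR"
# VFIOUSER transport plus a 64 MiB / 512 B-block malloc bdev to serve as the namespace
"$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t VFIOUSER
"$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
# Subsystem that allows any host (-a), backed by malloc0, listening on the vfio-user socket dir
"$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem "$NQN" -a -s spdk
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a "$TRADDR" -s 0
# Fuzz the target for 30 seconds with a fixed seed, as invoked above
"$SPDK_DIR"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 \
        -F "trtype:VFIOUSER subnqn:$NQN traddr:$TRADDR" -N -a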
00:14:03.520 22:14:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71861' 00:14:03.520 killing process with pid 71861 00:14:03.520 22:14:00 -- common/autotest_common.sh@955 -- # kill 71861 00:14:03.520 22:14:00 -- common/autotest_common.sh@960 -- # wait 71861 00:14:04.088 22:14:00 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:04.088 22:14:00 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:04.088 00:14:04.088 real 0m3.275s 00:14:04.088 user 0m3.602s 00:14:04.088 sys 0m0.499s 00:14:04.088 22:14:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:04.089 22:14:00 -- common/autotest_common.sh@10 -- # set +x 00:14:04.089 ************************************ 00:14:04.089 END TEST nvmf_vfio_user_fuzz 00:14:04.089 ************************************ 00:14:04.089 22:14:00 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:04.089 22:14:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:04.089 22:14:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:04.089 22:14:00 -- common/autotest_common.sh@10 -- # set +x 00:14:04.089 ************************************ 00:14:04.089 START TEST nvmf_host_management 00:14:04.089 ************************************ 00:14:04.089 22:14:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:04.089 * Looking for test storage... 00:14:04.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:04.089 22:14:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:04.089 22:14:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:04.089 22:14:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:04.348 22:14:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:04.348 22:14:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:04.348 22:14:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:04.348 22:14:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:04.348 22:14:00 -- scripts/common.sh@335 -- # IFS=.-: 00:14:04.348 22:14:00 -- scripts/common.sh@335 -- # read -ra ver1 00:14:04.348 22:14:00 -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.348 22:14:00 -- scripts/common.sh@336 -- # read -ra ver2 00:14:04.348 22:14:00 -- scripts/common.sh@337 -- # local 'op=<' 00:14:04.348 22:14:00 -- scripts/common.sh@339 -- # ver1_l=2 00:14:04.348 22:14:00 -- scripts/common.sh@340 -- # ver2_l=1 00:14:04.349 22:14:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:04.349 22:14:00 -- scripts/common.sh@343 -- # case "$op" in 00:14:04.349 22:14:00 -- scripts/common.sh@344 -- # : 1 00:14:04.349 22:14:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:04.349 22:14:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:04.349 22:14:00 -- scripts/common.sh@364 -- # decimal 1 00:14:04.349 22:14:00 -- scripts/common.sh@352 -- # local d=1 00:14:04.349 22:14:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.349 22:14:00 -- scripts/common.sh@354 -- # echo 1 00:14:04.349 22:14:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:04.349 22:14:00 -- scripts/common.sh@365 -- # decimal 2 00:14:04.349 22:14:00 -- scripts/common.sh@352 -- # local d=2 00:14:04.349 22:14:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:04.349 22:14:00 -- scripts/common.sh@354 -- # echo 2 00:14:04.349 22:14:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:04.349 22:14:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:04.349 22:14:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:04.349 22:14:00 -- scripts/common.sh@367 -- # return 0 00:14:04.349 22:14:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:04.349 22:14:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:04.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.349 --rc genhtml_branch_coverage=1 00:14:04.349 --rc genhtml_function_coverage=1 00:14:04.349 --rc genhtml_legend=1 00:14:04.349 --rc geninfo_all_blocks=1 00:14:04.349 --rc geninfo_unexecuted_blocks=1 00:14:04.349 00:14:04.349 ' 00:14:04.349 22:14:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:04.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.349 --rc genhtml_branch_coverage=1 00:14:04.349 --rc genhtml_function_coverage=1 00:14:04.349 --rc genhtml_legend=1 00:14:04.349 --rc geninfo_all_blocks=1 00:14:04.349 --rc geninfo_unexecuted_blocks=1 00:14:04.349 00:14:04.349 ' 00:14:04.349 22:14:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:04.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.349 --rc genhtml_branch_coverage=1 00:14:04.349 --rc genhtml_function_coverage=1 00:14:04.349 --rc genhtml_legend=1 00:14:04.349 --rc geninfo_all_blocks=1 00:14:04.349 --rc geninfo_unexecuted_blocks=1 00:14:04.349 00:14:04.349 ' 00:14:04.349 22:14:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:04.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.349 --rc genhtml_branch_coverage=1 00:14:04.349 --rc genhtml_function_coverage=1 00:14:04.349 --rc genhtml_legend=1 00:14:04.349 --rc geninfo_all_blocks=1 00:14:04.349 --rc geninfo_unexecuted_blocks=1 00:14:04.349 00:14:04.349 ' 00:14:04.349 22:14:00 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:04.349 22:14:00 -- nvmf/common.sh@7 -- # uname -s 00:14:04.349 22:14:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.349 22:14:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.349 22:14:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.349 22:14:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.349 22:14:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.349 22:14:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.349 22:14:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.349 22:14:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.349 22:14:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.349 22:14:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.349 22:14:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 
00:14:04.349 22:14:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:14:04.349 22:14:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.349 22:14:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.349 22:14:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:04.349 22:14:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:04.349 22:14:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.349 22:14:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.349 22:14:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.349 22:14:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.349 22:14:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.349 22:14:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.349 22:14:00 -- paths/export.sh@5 -- # export PATH 00:14:04.349 22:14:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.349 22:14:00 -- nvmf/common.sh@46 -- # : 0 00:14:04.349 22:14:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:04.349 22:14:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:04.349 22:14:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:04.349 22:14:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.349 22:14:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.349 22:14:00 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:04.349 22:14:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:04.349 22:14:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:04.349 22:14:00 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:04.349 22:14:00 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:04.349 22:14:00 -- target/host_management.sh@104 -- # nvmftestinit 00:14:04.349 22:14:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:04.349 22:14:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.349 22:14:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:04.349 22:14:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:04.349 22:14:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:04.349 22:14:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.349 22:14:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.349 22:14:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.349 22:14:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:04.349 22:14:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:04.349 22:14:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:04.349 22:14:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:04.349 22:14:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:04.349 22:14:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:04.349 22:14:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.349 22:14:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.349 22:14:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:04.349 22:14:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:04.349 22:14:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:04.349 22:14:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:04.349 22:14:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:04.349 22:14:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.349 22:14:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:04.349 22:14:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:04.349 22:14:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:04.350 22:14:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:04.350 22:14:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:04.350 22:14:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:04.350 Cannot find device "nvmf_tgt_br" 00:14:04.350 22:14:00 -- nvmf/common.sh@154 -- # true 00:14:04.350 22:14:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:04.350 Cannot find device "nvmf_tgt_br2" 00:14:04.350 22:14:00 -- nvmf/common.sh@155 -- # true 00:14:04.350 22:14:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:04.350 22:14:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:04.350 Cannot find device "nvmf_tgt_br" 00:14:04.350 22:14:00 -- nvmf/common.sh@157 -- # true 00:14:04.350 22:14:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:04.350 Cannot find device "nvmf_tgt_br2" 00:14:04.350 22:14:00 -- nvmf/common.sh@158 -- # true 00:14:04.350 22:14:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:04.350 22:14:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:04.350 22:14:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:04.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.609 22:14:00 -- nvmf/common.sh@161 -- # true 00:14:04.609 22:14:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:04.609 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.609 22:14:00 -- nvmf/common.sh@162 -- # true 00:14:04.609 22:14:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:04.609 22:14:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:04.609 22:14:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:04.609 22:14:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:04.609 22:14:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:04.609 22:14:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:04.609 22:14:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:04.609 22:14:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:04.609 22:14:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:04.609 22:14:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:04.609 22:14:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:04.609 22:14:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:04.609 22:14:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:04.609 22:14:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:04.609 22:14:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:04.609 22:14:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:04.609 22:14:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:04.609 22:14:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:04.609 22:14:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:04.609 22:14:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:04.609 22:14:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:04.609 22:14:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:04.609 22:14:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:04.609 22:14:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:04.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:14:04.609 00:14:04.609 --- 10.0.0.2 ping statistics --- 00:14:04.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.609 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:04.609 22:14:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:04.609 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:04.609 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:14:04.609 00:14:04.609 --- 10.0.0.3 ping statistics --- 00:14:04.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.610 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:04.610 22:14:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:04.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:04.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:04.610 00:14:04.610 --- 10.0.0.1 ping statistics --- 00:14:04.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.610 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:04.610 22:14:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.610 22:14:01 -- nvmf/common.sh@421 -- # return 0 00:14:04.610 22:14:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:04.610 22:14:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.610 22:14:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:04.610 22:14:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:04.610 22:14:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.610 22:14:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:04.610 22:14:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:04.610 22:14:01 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:04.610 22:14:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:04.610 22:14:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:04.610 22:14:01 -- common/autotest_common.sh@10 -- # set +x 00:14:04.610 ************************************ 00:14:04.610 START TEST nvmf_host_management 00:14:04.610 ************************************ 00:14:04.610 22:14:01 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:04.610 22:14:01 -- target/host_management.sh@69 -- # starttarget 00:14:04.610 22:14:01 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:04.610 22:14:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:04.610 22:14:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:04.610 22:14:01 -- common/autotest_common.sh@10 -- # set +x 00:14:04.610 22:14:01 -- nvmf/common.sh@469 -- # nvmfpid=72110 00:14:04.610 22:14:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:04.610 22:14:01 -- nvmf/common.sh@470 -- # waitforlisten 72110 00:14:04.610 22:14:01 -- common/autotest_common.sh@829 -- # '[' -z 72110 ']' 00:14:04.610 22:14:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.610 22:14:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.610 22:14:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.610 22:14:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.610 22:14:01 -- common/autotest_common.sh@10 -- # set +x 00:14:04.869 [2024-11-17 22:14:01.259043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:04.869 [2024-11-17 22:14:01.259152] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.869 [2024-11-17 22:14:01.398655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.127 [2024-11-17 22:14:01.559341] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:05.127 [2024-11-17 22:14:01.559557] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:05.127 [2024-11-17 22:14:01.559575] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.127 [2024-11-17 22:14:01.559586] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.127 [2024-11-17 22:14:01.559836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.127 [2024-11-17 22:14:01.559999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.127 [2024-11-17 22:14:01.560848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:05.127 [2024-11-17 22:14:01.560878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.062 22:14:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.062 22:14:02 -- common/autotest_common.sh@862 -- # return 0 00:14:06.062 22:14:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:06.062 22:14:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.062 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:14:06.062 22:14:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.062 22:14:02 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.062 22:14:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.062 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:14:06.062 [2024-11-17 22:14:02.371601] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.062 22:14:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.062 22:14:02 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:06.062 22:14:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.062 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:14:06.062 22:14:02 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:06.062 22:14:02 -- target/host_management.sh@23 -- # cat 00:14:06.062 22:14:02 -- target/host_management.sh@30 -- # rpc_cmd 00:14:06.062 22:14:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.062 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:14:06.062 Malloc0 00:14:06.062 [2024-11-17 22:14:02.460770] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.062 22:14:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.062 22:14:02 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:06.063 22:14:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.063 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:14:06.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.063 22:14:02 -- target/host_management.sh@73 -- # perfpid=72183 00:14:06.063 22:14:02 -- target/host_management.sh@74 -- # waitforlisten 72183 /var/tmp/bdevperf.sock 00:14:06.063 22:14:02 -- common/autotest_common.sh@829 -- # '[' -z 72183 ']' 00:14:06.063 22:14:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.063 22:14:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.063 22:14:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
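For reference, the virtual test network that nvmf_veth_init assembles above (NET_TYPE=virt), and inside which the TCP target is then started, condenses to roughly the commands below. They are copied from this trace; the sketch assumes it runs as root on a host with iproute2 and iptables available.

# Target namespace bridged to the initiator side; names and addresses as used in this run.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Sanity pings in both directions, then the target runs inside the namespace
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E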
00:14:06.063 22:14:02 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:06.063 22:14:02 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:06.063 22:14:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.063 22:14:02 -- nvmf/common.sh@520 -- # config=() 00:14:06.063 22:14:02 -- common/autotest_common.sh@10 -- # set +x 00:14:06.063 22:14:02 -- nvmf/common.sh@520 -- # local subsystem config 00:14:06.063 22:14:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:06.063 22:14:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:06.063 { 00:14:06.063 "params": { 00:14:06.063 "name": "Nvme$subsystem", 00:14:06.063 "trtype": "$TEST_TRANSPORT", 00:14:06.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.063 "adrfam": "ipv4", 00:14:06.063 "trsvcid": "$NVMF_PORT", 00:14:06.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.063 "hdgst": ${hdgst:-false}, 00:14:06.063 "ddgst": ${ddgst:-false} 00:14:06.063 }, 00:14:06.063 "method": "bdev_nvme_attach_controller" 00:14:06.063 } 00:14:06.063 EOF 00:14:06.063 )") 00:14:06.063 22:14:02 -- nvmf/common.sh@542 -- # cat 00:14:06.063 22:14:02 -- nvmf/common.sh@544 -- # jq . 00:14:06.063 22:14:02 -- nvmf/common.sh@545 -- # IFS=, 00:14:06.063 22:14:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:06.063 "params": { 00:14:06.063 "name": "Nvme0", 00:14:06.063 "trtype": "tcp", 00:14:06.063 "traddr": "10.0.0.2", 00:14:06.063 "adrfam": "ipv4", 00:14:06.063 "trsvcid": "4420", 00:14:06.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:06.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:06.063 "hdgst": false, 00:14:06.063 "ddgst": false 00:14:06.063 }, 00:14:06.063 "method": "bdev_nvme_attach_controller" 00:14:06.063 }' 00:14:06.063 [2024-11-17 22:14:02.581725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:06.063 [2024-11-17 22:14:02.581878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72183 ] 00:14:06.321 [2024-11-17 22:14:02.725361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.321 [2024-11-17 22:14:02.890971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.580 Running I/O for 10 seconds... 
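The bdevperf job above receives its NVMe-oF attach parameters as JSON on /dev/fd/63. A sketch of the equivalent standalone invocation follows; the per-controller block is copied from the generated config printed in this trace, while the outer "subsystems"/"bdev" wrapper follows the standard SPDK JSON-config shape and is an assumption here, since gen_nvmf_target_json's full output is not reproduced verbatim in the log.

# Sketch: run bdevperf against the target with the same parameters as the job above (bash, for <()).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -q 64 -o 65536 -w verify -t 10 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)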
00:14:07.148 22:14:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.148 22:14:03 -- common/autotest_common.sh@862 -- # return 0 00:14:07.148 22:14:03 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:07.148 22:14:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.148 22:14:03 -- common/autotest_common.sh@10 -- # set +x 00:14:07.148 22:14:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.148 22:14:03 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:07.148 22:14:03 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:07.148 22:14:03 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:07.148 22:14:03 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:07.148 22:14:03 -- target/host_management.sh@52 -- # local ret=1 00:14:07.148 22:14:03 -- target/host_management.sh@53 -- # local i 00:14:07.148 22:14:03 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:07.148 22:14:03 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:07.148 22:14:03 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:07.148 22:14:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.148 22:14:03 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:07.148 22:14:03 -- common/autotest_common.sh@10 -- # set +x 00:14:07.148 22:14:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.148 22:14:03 -- target/host_management.sh@55 -- # read_io_count=1586 00:14:07.148 22:14:03 -- target/host_management.sh@58 -- # '[' 1586 -ge 100 ']' 00:14:07.148 22:14:03 -- target/host_management.sh@59 -- # ret=0 00:14:07.148 22:14:03 -- target/host_management.sh@60 -- # break 00:14:07.148 22:14:03 -- target/host_management.sh@64 -- # return 0 00:14:07.148 22:14:03 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:07.148 22:14:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.148 22:14:03 -- common/autotest_common.sh@10 -- # set +x 00:14:07.148 [2024-11-17 22:14:03.658717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the 
state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658888] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.148 [2024-11-17 22:14:03.658920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.658929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.658937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.658945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.658953] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.658961] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.658969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.658977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.658985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.658993] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659002] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659054] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659071] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659098] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659107] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.659133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0910 is same with the state(5) to be set 00:14:07.149 [2024-11-17 22:14:03.660589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.660844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.661255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.661578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.661730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.661767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.661791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 
22:14:03.661813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.661846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.661868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.661890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.661911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.661934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.661955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.661976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.661988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662049] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.149 [2024-11-17 22:14:03.662311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.149 [2024-11-17 22:14:03.662320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.662985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.662996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.663005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.663016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.663024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.663035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.663045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.663056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.150 [2024-11-17 22:14:03.663066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.150 [2024-11-17 22:14:03.663077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b0400 is same with the state(5) to be set 00:14:07.150 [2024-11-17 22:14:03.663185] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12b0400 was disconnected and freed. reset controller. 
00:14:07.150 [2024-11-17 22:14:03.664370] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:07.150 task offset: 95488 on job bdev=Nvme0n1 fails 00:14:07.150 00:14:07.150 Latency(us) 00:14:07.150 [2024-11-17T22:14:03.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.150 [2024-11-17T22:14:03.765Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:07.150 [2024-11-17T22:14:03.765Z] Job: Nvme0n1 ended in about 0.55 seconds with error 00:14:07.150 Verification LBA range: start 0x0 length 0x400 00:14:07.150 Nvme0n1 : 0.55 3164.69 197.79 115.87 0.00 19154.62 3247.01 24903.68 00:14:07.150 [2024-11-17T22:14:03.765Z] =================================================================================================================== 00:14:07.150 [2024-11-17T22:14:03.766Z] Total : 3164.69 197.79 115.87 0.00 19154.62 3247.01 24903.68 00:14:07.151 [2024-11-17 22:14:03.666417] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:07.151 [2024-11-17 22:14:03.666445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12dcdc0 (9): Bad file descriptor 00:14:07.151 22:14:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.151 22:14:03 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:07.151 22:14:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.151 [2024-11-17 22:14:03.667552] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:14:07.151 [2024-11-17 22:14:03.667646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:07.151 [2024-11-17 22:14:03.667670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.151 22:14:03 -- common/autotest_common.sh@10 -- # set +x 00:14:07.151 [2024-11-17 22:14:03.667688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:14:07.151 [2024-11-17 22:14:03.667699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:14:07.151 [2024-11-17 22:14:03.667709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:14:07.151 [2024-11-17 22:14:03.667718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dcdc0 00:14:07.151 [2024-11-17 22:14:03.667752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12dcdc0 (9): Bad file descriptor 00:14:07.151 [2024-11-17 22:14:03.667783] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:14:07.151 [2024-11-17 22:14:03.667795] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:14:07.151 [2024-11-17 22:14:03.667806] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:14:07.151 [2024-11-17 22:14:03.667825] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
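The rejected FABRIC CONNECT above is the step host_management.sh is exercising: at this point the subsystem does not list nqn.2016-06.io.spdk:host0 among its allowed hosts, so the reconnect fails until rpc.py nvmf_subsystem_add_host runs. A minimal sketch of that access-control sequence, built only from the rpc.py calls visible in this log (the real script performs additional setup and retries that are not shown here):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, as used in this run
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0        # without -a, only explicitly allowed hosts may connect
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# A connect from nqn.2016-06.io.spdk:host0 now fails with "does not allow host" ...
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# ... and succeeds once the host NQN has been added to the subsystem.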
00:14:07.151 22:14:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.151 22:14:03 -- target/host_management.sh@87 -- # sleep 1 00:14:08.087 22:14:04 -- target/host_management.sh@91 -- # kill -9 72183 00:14:08.087 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72183) - No such process 00:14:08.087 22:14:04 -- target/host_management.sh@91 -- # true 00:14:08.087 22:14:04 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:08.087 22:14:04 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:08.087 22:14:04 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:08.087 22:14:04 -- nvmf/common.sh@520 -- # config=() 00:14:08.087 22:14:04 -- nvmf/common.sh@520 -- # local subsystem config 00:14:08.087 22:14:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:08.087 22:14:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:08.087 { 00:14:08.087 "params": { 00:14:08.087 "name": "Nvme$subsystem", 00:14:08.087 "trtype": "$TEST_TRANSPORT", 00:14:08.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:08.087 "adrfam": "ipv4", 00:14:08.087 "trsvcid": "$NVMF_PORT", 00:14:08.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:08.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:08.087 "hdgst": ${hdgst:-false}, 00:14:08.087 "ddgst": ${ddgst:-false} 00:14:08.087 }, 00:14:08.087 "method": "bdev_nvme_attach_controller" 00:14:08.087 } 00:14:08.087 EOF 00:14:08.087 )") 00:14:08.087 22:14:04 -- nvmf/common.sh@542 -- # cat 00:14:08.087 22:14:04 -- nvmf/common.sh@544 -- # jq . 00:14:08.087 22:14:04 -- nvmf/common.sh@545 -- # IFS=, 00:14:08.087 22:14:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:08.087 "params": { 00:14:08.087 "name": "Nvme0", 00:14:08.087 "trtype": "tcp", 00:14:08.087 "traddr": "10.0.0.2", 00:14:08.087 "adrfam": "ipv4", 00:14:08.087 "trsvcid": "4420", 00:14:08.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:08.087 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:08.087 "hdgst": false, 00:14:08.087 "ddgst": false 00:14:08.087 }, 00:14:08.087 "method": "bdev_nvme_attach_controller" 00:14:08.087 }' 00:14:08.345 [2024-11-17 22:14:04.747870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:08.345 [2024-11-17 22:14:04.748010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72234 ] 00:14:08.345 [2024-11-17 22:14:04.888873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.604 [2024-11-17 22:14:05.043208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.863 Running I/O for 1 seconds... 
00:14:09.799 00:14:09.799 Latency(us) 00:14:09.799 [2024-11-17T22:14:06.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.799 [2024-11-17T22:14:06.414Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:09.799 Verification LBA range: start 0x0 length 0x400 00:14:09.799 Nvme0n1 : 1.01 3439.56 214.97 0.00 0.00 18284.15 1035.17 24546.21 00:14:09.799 [2024-11-17T22:14:06.414Z] =================================================================================================================== 00:14:09.799 [2024-11-17T22:14:06.414Z] Total : 3439.56 214.97 0.00 0.00 18284.15 1035.17 24546.21 00:14:10.368 22:14:06 -- target/host_management.sh@101 -- # stoptarget 00:14:10.368 22:14:06 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:10.368 22:14:06 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:10.368 22:14:06 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:10.368 22:14:06 -- target/host_management.sh@40 -- # nvmftestfini 00:14:10.368 22:14:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:10.368 22:14:06 -- nvmf/common.sh@116 -- # sync 00:14:10.368 22:14:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:10.368 22:14:06 -- nvmf/common.sh@119 -- # set +e 00:14:10.368 22:14:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:10.368 22:14:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:10.368 rmmod nvme_tcp 00:14:10.368 rmmod nvme_fabrics 00:14:10.368 rmmod nvme_keyring 00:14:10.368 22:14:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:10.368 22:14:06 -- nvmf/common.sh@123 -- # set -e 00:14:10.368 22:14:06 -- nvmf/common.sh@124 -- # return 0 00:14:10.368 22:14:06 -- nvmf/common.sh@477 -- # '[' -n 72110 ']' 00:14:10.368 22:14:06 -- nvmf/common.sh@478 -- # killprocess 72110 00:14:10.368 22:14:06 -- common/autotest_common.sh@936 -- # '[' -z 72110 ']' 00:14:10.368 22:14:06 -- common/autotest_common.sh@940 -- # kill -0 72110 00:14:10.368 22:14:06 -- common/autotest_common.sh@941 -- # uname 00:14:10.368 22:14:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:10.368 22:14:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72110 00:14:10.368 killing process with pid 72110 00:14:10.368 22:14:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:10.368 22:14:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:10.368 22:14:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72110' 00:14:10.368 22:14:06 -- common/autotest_common.sh@955 -- # kill 72110 00:14:10.368 22:14:06 -- common/autotest_common.sh@960 -- # wait 72110 00:14:10.936 [2024-11-17 22:14:07.249077] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:10.936 22:14:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:10.936 22:14:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:10.936 22:14:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:10.936 22:14:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.936 22:14:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:10.936 22:14:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.936 22:14:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.936 22:14:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.936 22:14:07 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:10.936 00:14:10.936 real 0m6.133s 00:14:10.936 user 0m25.464s 00:14:10.936 sys 0m1.599s 00:14:10.936 22:14:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:10.936 ************************************ 00:14:10.936 END TEST nvmf_host_management 00:14:10.936 ************************************ 00:14:10.936 22:14:07 -- common/autotest_common.sh@10 -- # set +x 00:14:10.936 22:14:07 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:10.936 00:14:10.936 real 0m6.777s 00:14:10.936 user 0m25.667s 00:14:10.936 sys 0m1.890s 00:14:10.936 ************************************ 00:14:10.936 END TEST nvmf_host_management 00:14:10.936 ************************************ 00:14:10.936 22:14:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:10.936 22:14:07 -- common/autotest_common.sh@10 -- # set +x 00:14:10.936 22:14:07 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:10.936 22:14:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:10.936 22:14:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:10.936 22:14:07 -- common/autotest_common.sh@10 -- # set +x 00:14:10.936 ************************************ 00:14:10.936 START TEST nvmf_lvol 00:14:10.936 ************************************ 00:14:10.936 22:14:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:10.936 * Looking for test storage... 00:14:10.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:10.936 22:14:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:10.936 22:14:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:10.936 22:14:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:11.196 22:14:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:11.196 22:14:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:11.196 22:14:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:11.196 22:14:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:11.197 22:14:07 -- scripts/common.sh@335 -- # IFS=.-: 00:14:11.197 22:14:07 -- scripts/common.sh@335 -- # read -ra ver1 00:14:11.197 22:14:07 -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.197 22:14:07 -- scripts/common.sh@336 -- # read -ra ver2 00:14:11.197 22:14:07 -- scripts/common.sh@337 -- # local 'op=<' 00:14:11.197 22:14:07 -- scripts/common.sh@339 -- # ver1_l=2 00:14:11.197 22:14:07 -- scripts/common.sh@340 -- # ver2_l=1 00:14:11.197 22:14:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:11.197 22:14:07 -- scripts/common.sh@343 -- # case "$op" in 00:14:11.197 22:14:07 -- scripts/common.sh@344 -- # : 1 00:14:11.197 22:14:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:11.197 22:14:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.197 22:14:07 -- scripts/common.sh@364 -- # decimal 1 00:14:11.197 22:14:07 -- scripts/common.sh@352 -- # local d=1 00:14:11.197 22:14:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.197 22:14:07 -- scripts/common.sh@354 -- # echo 1 00:14:11.197 22:14:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:11.197 22:14:07 -- scripts/common.sh@365 -- # decimal 2 00:14:11.197 22:14:07 -- scripts/common.sh@352 -- # local d=2 00:14:11.197 22:14:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.197 22:14:07 -- scripts/common.sh@354 -- # echo 2 00:14:11.197 22:14:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:11.197 22:14:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:11.197 22:14:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:11.197 22:14:07 -- scripts/common.sh@367 -- # return 0 00:14:11.197 22:14:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.197 22:14:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:11.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.197 --rc genhtml_branch_coverage=1 00:14:11.197 --rc genhtml_function_coverage=1 00:14:11.197 --rc genhtml_legend=1 00:14:11.197 --rc geninfo_all_blocks=1 00:14:11.197 --rc geninfo_unexecuted_blocks=1 00:14:11.197 00:14:11.197 ' 00:14:11.197 22:14:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:11.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.197 --rc genhtml_branch_coverage=1 00:14:11.197 --rc genhtml_function_coverage=1 00:14:11.197 --rc genhtml_legend=1 00:14:11.197 --rc geninfo_all_blocks=1 00:14:11.197 --rc geninfo_unexecuted_blocks=1 00:14:11.197 00:14:11.197 ' 00:14:11.197 22:14:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:11.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.197 --rc genhtml_branch_coverage=1 00:14:11.197 --rc genhtml_function_coverage=1 00:14:11.197 --rc genhtml_legend=1 00:14:11.197 --rc geninfo_all_blocks=1 00:14:11.197 --rc geninfo_unexecuted_blocks=1 00:14:11.197 00:14:11.197 ' 00:14:11.197 22:14:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:11.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.197 --rc genhtml_branch_coverage=1 00:14:11.197 --rc genhtml_function_coverage=1 00:14:11.197 --rc genhtml_legend=1 00:14:11.197 --rc geninfo_all_blocks=1 00:14:11.197 --rc geninfo_unexecuted_blocks=1 00:14:11.197 00:14:11.197 ' 00:14:11.197 22:14:07 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:11.197 22:14:07 -- nvmf/common.sh@7 -- # uname -s 00:14:11.197 22:14:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.197 22:14:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.197 22:14:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.197 22:14:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.197 22:14:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.197 22:14:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.197 22:14:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.197 22:14:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.197 22:14:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.197 22:14:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.197 22:14:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:14:11.197 
22:14:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:14:11.197 22:14:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.197 22:14:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.197 22:14:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:11.197 22:14:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:11.197 22:14:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.197 22:14:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.197 22:14:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.197 22:14:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.197 22:14:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.197 22:14:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.197 22:14:07 -- paths/export.sh@5 -- # export PATH 00:14:11.197 22:14:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.197 22:14:07 -- nvmf/common.sh@46 -- # : 0 00:14:11.197 22:14:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:11.197 22:14:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:11.197 22:14:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:11.197 22:14:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.197 22:14:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.197 22:14:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:14:11.197 22:14:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:11.197 22:14:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:11.197 22:14:07 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.197 22:14:07 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.197 22:14:07 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:11.197 22:14:07 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:11.197 22:14:07 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:11.197 22:14:07 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:11.197 22:14:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:11.197 22:14:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.197 22:14:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:11.197 22:14:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:11.197 22:14:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:11.197 22:14:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.197 22:14:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.197 22:14:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.197 22:14:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:11.197 22:14:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:11.197 22:14:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:11.197 22:14:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:11.197 22:14:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:11.197 22:14:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:11.197 22:14:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.197 22:14:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.197 22:14:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:11.197 22:14:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:11.197 22:14:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:11.197 22:14:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:11.197 22:14:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:11.197 22:14:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.197 22:14:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:11.197 22:14:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:11.197 22:14:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:11.197 22:14:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:11.197 22:14:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:11.197 22:14:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:11.197 Cannot find device "nvmf_tgt_br" 00:14:11.197 22:14:07 -- nvmf/common.sh@154 -- # true 00:14:11.197 22:14:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:11.197 Cannot find device "nvmf_tgt_br2" 00:14:11.197 22:14:07 -- nvmf/common.sh@155 -- # true 00:14:11.197 22:14:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:11.197 22:14:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:11.197 Cannot find device "nvmf_tgt_br" 00:14:11.197 22:14:07 -- nvmf/common.sh@157 -- # true 00:14:11.197 22:14:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:11.197 Cannot find device "nvmf_tgt_br2" 00:14:11.197 22:14:07 -- nvmf/common.sh@158 -- # true 00:14:11.197 22:14:07 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:14:11.197 22:14:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:11.197 22:14:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:11.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.198 22:14:07 -- nvmf/common.sh@161 -- # true 00:14:11.198 22:14:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:11.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.198 22:14:07 -- nvmf/common.sh@162 -- # true 00:14:11.198 22:14:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:11.198 22:14:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:11.457 22:14:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:11.457 22:14:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:11.457 22:14:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:11.457 22:14:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:11.457 22:14:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:11.457 22:14:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:11.457 22:14:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:11.457 22:14:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:11.457 22:14:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:11.457 22:14:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:11.457 22:14:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:11.457 22:14:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:11.457 22:14:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:11.457 22:14:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:11.457 22:14:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:11.457 22:14:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:11.457 22:14:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:11.457 22:14:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:11.457 22:14:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:11.457 22:14:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:11.457 22:14:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:11.457 22:14:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:11.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:14:11.457 00:14:11.457 --- 10.0.0.2 ping statistics --- 00:14:11.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.457 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:14:11.457 22:14:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:11.457 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:11.457 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:14:11.457 00:14:11.457 --- 10.0.0.3 ping statistics --- 00:14:11.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.457 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:11.457 22:14:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:11.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:11.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:14:11.457 00:14:11.457 --- 10.0.0.1 ping statistics --- 00:14:11.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.457 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:11.457 22:14:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.457 22:14:07 -- nvmf/common.sh@421 -- # return 0 00:14:11.457 22:14:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:11.457 22:14:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.457 22:14:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:11.457 22:14:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:11.457 22:14:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.457 22:14:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:11.457 22:14:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:11.457 22:14:07 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:11.457 22:14:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:11.457 22:14:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.457 22:14:07 -- common/autotest_common.sh@10 -- # set +x 00:14:11.457 22:14:08 -- nvmf/common.sh@469 -- # nvmfpid=72481 00:14:11.457 22:14:08 -- nvmf/common.sh@470 -- # waitforlisten 72481 00:14:11.457 22:14:08 -- common/autotest_common.sh@829 -- # '[' -z 72481 ']' 00:14:11.457 22:14:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.457 22:14:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:11.457 22:14:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.457 22:14:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.457 22:14:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.457 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:14:11.457 [2024-11-17 22:14:08.064772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:11.457 [2024-11-17 22:14:08.064893] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.716 [2024-11-17 22:14:08.208584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:11.976 [2024-11-17 22:14:08.373325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:11.976 [2024-11-17 22:14:08.373535] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.976 [2024-11-17 22:14:08.373550] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:11.976 [2024-11-17 22:14:08.373562] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.976 [2024-11-17 22:14:08.373732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.976 [2024-11-17 22:14:08.374339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.976 [2024-11-17 22:14:08.374352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.545 22:14:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.545 22:14:09 -- common/autotest_common.sh@862 -- # return 0 00:14:12.545 22:14:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:12.545 22:14:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:12.545 22:14:09 -- common/autotest_common.sh@10 -- # set +x 00:14:12.545 22:14:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.545 22:14:09 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:12.802 [2024-11-17 22:14:09.367832] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.802 22:14:09 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:13.369 22:14:09 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:13.369 22:14:09 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:13.369 22:14:09 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:13.369 22:14:09 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:13.627 22:14:10 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:14.196 22:14:10 -- target/nvmf_lvol.sh@29 -- # lvs=b200ecad-a817-423d-a4a7-10719210c501 00:14:14.196 22:14:10 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b200ecad-a817-423d-a4a7-10719210c501 lvol 20 00:14:14.196 22:14:10 -- target/nvmf_lvol.sh@32 -- # lvol=4be2713c-487c-435f-ab96-a6f3820fd7f4 00:14:14.196 22:14:10 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:14.455 22:14:11 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4be2713c-487c-435f-ab96-a6f3820fd7f4 00:14:15.023 22:14:11 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:15.023 [2024-11-17 22:14:11.537429] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.023 22:14:11 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:15.283 22:14:11 -- target/nvmf_lvol.sh@42 -- # perf_pid=72623 00:14:15.283 22:14:11 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:15.283 22:14:11 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:16.237 22:14:12 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 4be2713c-487c-435f-ab96-a6f3820fd7f4 MY_SNAPSHOT 
00:14:16.805 22:14:13 -- target/nvmf_lvol.sh@47 -- # snapshot=58ab90ce-2e6a-4a25-b149-4dbecbc6866b 00:14:16.805 22:14:13 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 4be2713c-487c-435f-ab96-a6f3820fd7f4 30 00:14:17.064 22:14:13 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 58ab90ce-2e6a-4a25-b149-4dbecbc6866b MY_CLONE 00:14:17.323 22:14:13 -- target/nvmf_lvol.sh@49 -- # clone=24593a7f-9350-4cf3-bed6-c86bf1a88c36 00:14:17.323 22:14:13 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 24593a7f-9350-4cf3-bed6-c86bf1a88c36 00:14:17.891 22:14:14 -- target/nvmf_lvol.sh@53 -- # wait 72623 00:14:26.009 Initializing NVMe Controllers 00:14:26.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:26.009 Controller IO queue size 128, less than required. 00:14:26.009 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:26.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:26.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:26.009 Initialization complete. Launching workers. 00:14:26.009 ======================================================== 00:14:26.009 Latency(us) 00:14:26.009 Device Information : IOPS MiB/s Average min max 00:14:26.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10406.09 40.65 12299.89 2580.42 66003.08 00:14:26.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10265.49 40.10 12472.91 1380.64 72964.04 00:14:26.009 ======================================================== 00:14:26.009 Total : 20671.58 80.75 12385.81 1380.64 72964.04 00:14:26.009 00:14:26.009 22:14:22 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:26.009 22:14:22 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4be2713c-487c-435f-ab96-a6f3820fd7f4 00:14:26.268 22:14:22 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b200ecad-a817-423d-a4a7-10719210c501 00:14:26.527 22:14:23 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:26.527 22:14:23 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:26.527 22:14:23 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:26.527 22:14:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:26.527 22:14:23 -- nvmf/common.sh@116 -- # sync 00:14:26.527 22:14:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:26.527 22:14:23 -- nvmf/common.sh@119 -- # set +e 00:14:26.527 22:14:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:26.527 22:14:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:26.786 rmmod nvme_tcp 00:14:26.786 rmmod nvme_fabrics 00:14:26.786 rmmod nvme_keyring 00:14:26.786 22:14:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:26.786 22:14:23 -- nvmf/common.sh@123 -- # set -e 00:14:26.786 22:14:23 -- nvmf/common.sh@124 -- # return 0 00:14:26.786 22:14:23 -- nvmf/common.sh@477 -- # '[' -n 72481 ']' 00:14:26.786 22:14:23 -- nvmf/common.sh@478 -- # killprocess 72481 00:14:26.786 22:14:23 -- common/autotest_common.sh@936 -- # '[' -z 72481 ']' 00:14:26.786 22:14:23 -- common/autotest_common.sh@940 -- # kill -0 72481 00:14:26.786 22:14:23 -- common/autotest_common.sh@941 -- # uname 00:14:26.786 
22:14:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:26.786 22:14:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72481 00:14:26.786 killing process with pid 72481 00:14:26.786 22:14:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:26.786 22:14:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:26.786 22:14:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72481' 00:14:26.786 22:14:23 -- common/autotest_common.sh@955 -- # kill 72481 00:14:26.786 22:14:23 -- common/autotest_common.sh@960 -- # wait 72481 00:14:27.354 22:14:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:27.354 22:14:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:27.354 22:14:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:27.354 22:14:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.354 22:14:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:27.354 22:14:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.354 22:14:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.354 22:14:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.354 22:14:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:27.354 00:14:27.354 real 0m16.286s 00:14:27.354 user 1m7.305s 00:14:27.354 sys 0m3.632s 00:14:27.354 22:14:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:27.354 ************************************ 00:14:27.354 END TEST nvmf_lvol 00:14:27.354 22:14:23 -- common/autotest_common.sh@10 -- # set +x 00:14:27.354 ************************************ 00:14:27.354 22:14:23 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:27.354 22:14:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:27.354 22:14:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:27.354 22:14:23 -- common/autotest_common.sh@10 -- # set +x 00:14:27.354 ************************************ 00:14:27.354 START TEST nvmf_lvs_grow 00:14:27.354 ************************************ 00:14:27.354 22:14:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:27.354 * Looking for test storage... 
00:14:27.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:27.354 22:14:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:27.354 22:14:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:27.354 22:14:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:27.354 22:14:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:27.354 22:14:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:27.354 22:14:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:27.354 22:14:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:27.354 22:14:23 -- scripts/common.sh@335 -- # IFS=.-: 00:14:27.354 22:14:23 -- scripts/common.sh@335 -- # read -ra ver1 00:14:27.354 22:14:23 -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.354 22:14:23 -- scripts/common.sh@336 -- # read -ra ver2 00:14:27.354 22:14:23 -- scripts/common.sh@337 -- # local 'op=<' 00:14:27.354 22:14:23 -- scripts/common.sh@339 -- # ver1_l=2 00:14:27.354 22:14:23 -- scripts/common.sh@340 -- # ver2_l=1 00:14:27.354 22:14:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:27.354 22:14:23 -- scripts/common.sh@343 -- # case "$op" in 00:14:27.354 22:14:23 -- scripts/common.sh@344 -- # : 1 00:14:27.354 22:14:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:27.354 22:14:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:27.354 22:14:23 -- scripts/common.sh@364 -- # decimal 1 00:14:27.354 22:14:23 -- scripts/common.sh@352 -- # local d=1 00:14:27.354 22:14:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.354 22:14:23 -- scripts/common.sh@354 -- # echo 1 00:14:27.354 22:14:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:27.354 22:14:23 -- scripts/common.sh@365 -- # decimal 2 00:14:27.354 22:14:23 -- scripts/common.sh@352 -- # local d=2 00:14:27.354 22:14:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.354 22:14:23 -- scripts/common.sh@354 -- # echo 2 00:14:27.354 22:14:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:27.354 22:14:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:27.354 22:14:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:27.354 22:14:23 -- scripts/common.sh@367 -- # return 0 00:14:27.354 22:14:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.354 22:14:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:27.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.354 --rc genhtml_branch_coverage=1 00:14:27.354 --rc genhtml_function_coverage=1 00:14:27.354 --rc genhtml_legend=1 00:14:27.354 --rc geninfo_all_blocks=1 00:14:27.354 --rc geninfo_unexecuted_blocks=1 00:14:27.354 00:14:27.354 ' 00:14:27.354 22:14:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:27.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.354 --rc genhtml_branch_coverage=1 00:14:27.354 --rc genhtml_function_coverage=1 00:14:27.354 --rc genhtml_legend=1 00:14:27.354 --rc geninfo_all_blocks=1 00:14:27.354 --rc geninfo_unexecuted_blocks=1 00:14:27.354 00:14:27.354 ' 00:14:27.354 22:14:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:27.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.354 --rc genhtml_branch_coverage=1 00:14:27.354 --rc genhtml_function_coverage=1 00:14:27.354 --rc genhtml_legend=1 00:14:27.354 --rc geninfo_all_blocks=1 00:14:27.354 --rc geninfo_unexecuted_blocks=1 00:14:27.354 00:14:27.354 ' 00:14:27.354 
22:14:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:27.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.354 --rc genhtml_branch_coverage=1 00:14:27.354 --rc genhtml_function_coverage=1 00:14:27.354 --rc genhtml_legend=1 00:14:27.354 --rc geninfo_all_blocks=1 00:14:27.354 --rc geninfo_unexecuted_blocks=1 00:14:27.354 00:14:27.354 ' 00:14:27.354 22:14:23 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:27.354 22:14:23 -- nvmf/common.sh@7 -- # uname -s 00:14:27.354 22:14:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.354 22:14:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.354 22:14:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.354 22:14:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.354 22:14:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.354 22:14:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.354 22:14:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.354 22:14:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.354 22:14:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.354 22:14:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.614 22:14:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:14:27.614 22:14:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:14:27.614 22:14:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.614 22:14:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.614 22:14:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:27.614 22:14:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:27.614 22:14:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.614 22:14:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.614 22:14:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.614 22:14:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.614 22:14:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.614 22:14:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.614 22:14:23 -- paths/export.sh@5 -- # export PATH 00:14:27.614 22:14:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.614 22:14:23 -- nvmf/common.sh@46 -- # : 0 00:14:27.614 22:14:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:27.614 22:14:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:27.614 22:14:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:27.614 22:14:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.614 22:14:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.614 22:14:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:27.614 22:14:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:27.614 22:14:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:27.614 22:14:23 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.614 22:14:23 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:27.614 22:14:23 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:27.614 22:14:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:27.614 22:14:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.614 22:14:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:27.614 22:14:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:27.614 22:14:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:27.614 22:14:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.614 22:14:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.614 22:14:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.614 22:14:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:27.614 22:14:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:27.614 22:14:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:27.614 22:14:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:27.614 22:14:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:27.614 22:14:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:27.614 22:14:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.614 22:14:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.614 22:14:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:27.614 22:14:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:27.614 22:14:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:27.614 22:14:23 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:27.614 22:14:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:27.614 22:14:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.614 22:14:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:27.614 22:14:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:27.614 22:14:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:27.614 22:14:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:27.614 22:14:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:27.614 22:14:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:27.614 Cannot find device "nvmf_tgt_br" 00:14:27.614 22:14:24 -- nvmf/common.sh@154 -- # true 00:14:27.614 22:14:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:27.614 Cannot find device "nvmf_tgt_br2" 00:14:27.614 22:14:24 -- nvmf/common.sh@155 -- # true 00:14:27.614 22:14:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:27.614 22:14:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:27.614 Cannot find device "nvmf_tgt_br" 00:14:27.614 22:14:24 -- nvmf/common.sh@157 -- # true 00:14:27.614 22:14:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:27.614 Cannot find device "nvmf_tgt_br2" 00:14:27.614 22:14:24 -- nvmf/common.sh@158 -- # true 00:14:27.614 22:14:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:27.614 22:14:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:27.614 22:14:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:27.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:27.614 22:14:24 -- nvmf/common.sh@161 -- # true 00:14:27.614 22:14:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:27.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:27.614 22:14:24 -- nvmf/common.sh@162 -- # true 00:14:27.614 22:14:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:27.614 22:14:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:27.614 22:14:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:27.614 22:14:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:27.614 22:14:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:27.614 22:14:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:27.614 22:14:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:27.614 22:14:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:27.614 22:14:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:27.614 22:14:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:27.614 22:14:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:27.614 22:14:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:27.614 22:14:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:27.614 22:14:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:27.614 22:14:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:27.874 22:14:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:27.874 22:14:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:27.874 22:14:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:27.874 22:14:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:27.874 22:14:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:27.874 22:14:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:27.874 22:14:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:27.874 22:14:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:27.874 22:14:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:27.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:14:27.874 00:14:27.874 --- 10.0.0.2 ping statistics --- 00:14:27.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.874 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:27.874 22:14:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:27.874 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:27.874 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:14:27.874 00:14:27.874 --- 10.0.0.3 ping statistics --- 00:14:27.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.874 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:27.874 22:14:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:27.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:27.874 00:14:27.874 --- 10.0.0.1 ping statistics --- 00:14:27.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.874 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:27.874 22:14:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.874 22:14:24 -- nvmf/common.sh@421 -- # return 0 00:14:27.874 22:14:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:27.874 22:14:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.874 22:14:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:27.874 22:14:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:27.874 22:14:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.874 22:14:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:27.874 22:14:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:27.874 22:14:24 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:27.874 22:14:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:27.874 22:14:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:27.874 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:14:27.874 22:14:24 -- nvmf/common.sh@469 -- # nvmfpid=73003 00:14:27.874 22:14:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:27.874 22:14:24 -- nvmf/common.sh@470 -- # waitforlisten 73003 00:14:27.874 22:14:24 -- common/autotest_common.sh@829 -- # '[' -z 73003 ']' 00:14:27.874 22:14:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
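The nvmf_veth_init sequence traced above (veth pairs, a dedicated network namespace for the target, a bridge joining the host-side ends, iptables accept rules, and ping checks) amounts to the following standalone bring-up. This is only a minimal sketch of the single target-interface case; interface names and addresses are the ones the trace uses, while the loop and the reduced interface count are simplifications:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT          # bridge-local forwarding, as in the trace
  ping -c 1 10.0.0.2                                           # host must reach the target address first

With that in place the host acts as the NVMe/TCP initiator on 10.0.0.1, and anything started under ip netns exec nvmf_tgt_ns_spdk listens on 10.0.0.2.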
00:14:27.874 22:14:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.874 22:14:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.874 22:14:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.874 22:14:24 -- common/autotest_common.sh@10 -- # set +x 00:14:27.874 [2024-11-17 22:14:24.385188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:27.874 [2024-11-17 22:14:24.385281] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.133 [2024-11-17 22:14:24.513687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.133 [2024-11-17 22:14:24.655708] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:28.133 [2024-11-17 22:14:24.655900] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.133 [2024-11-17 22:14:24.655916] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.133 [2024-11-17 22:14:24.655925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.133 [2024-11-17 22:14:24.655954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.069 22:14:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.069 22:14:25 -- common/autotest_common.sh@862 -- # return 0 00:14:29.069 22:14:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:29.069 22:14:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.069 22:14:25 -- common/autotest_common.sh@10 -- # set +x 00:14:29.069 22:14:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.069 22:14:25 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:29.327 [2024-11-17 22:14:25.687838] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.327 22:14:25 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:29.327 22:14:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:29.327 22:14:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.327 22:14:25 -- common/autotest_common.sh@10 -- # set +x 00:14:29.327 ************************************ 00:14:29.327 START TEST lvs_grow_clean 00:14:29.327 ************************************ 00:14:29.327 22:14:25 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:29.327 22:14:25 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:29.327 22:14:25 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:29.327 22:14:25 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:29.327 22:14:25 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:29.327 22:14:25 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:29.327 22:14:25 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:29.327 22:14:25 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:29.327 22:14:25 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 
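At this point the target is running inside the namespace and a 200 MiB file has been created as the AIO backing store. The trace that follows drives the storage setup over JSON-RPC; condensed into a sketch, with rpc.py on its default socket, the md-pages option dropped, and the lvstore UUID captured from the command's stdout (the real script keeps the same values in shell variables):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport; -u 8192 sets 8 KiB in-capsule data
  $rpc bdev_aio_create "$aio" aio_bdev 4096                    # AIO bdev with 4 KiB blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
  $rpc bdev_lvol_create -u "$lvs" lvol 150                     # 150 MiB thick lvol in a store of 49 data clusters
  truncate -s 400M "$aio"                                      # grow the file under the bdev...
  $rpc bdev_aio_rescan aio_bdev                                # ...and let the AIO bdev notice the new size
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                        # issued later, mid-I/O: clusters go from 49 to 99

The subsystem side (nvmf_create_subsystem, nvmf_subsystem_add_ns with the lvol, nvmf_subsystem_add_listener on 10.0.0.2:4420) then exposes the lvol as namespace 1 of nqn.2016-06.io.spdk:cnode0.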
00:14:29.327 22:14:25 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:29.585 22:14:26 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:29.585 22:14:26 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:29.844 22:14:26 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 00:14:29.844 22:14:26 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 00:14:29.844 22:14:26 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:30.103 22:14:26 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:30.103 22:14:26 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:30.103 22:14:26 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 lvol 150 00:14:30.362 22:14:26 -- target/nvmf_lvs_grow.sh@33 -- # lvol=bdaded0f-7d6d-4c16-b697-aaf0d56eb281 00:14:30.362 22:14:26 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:30.362 22:14:26 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:30.620 [2024-11-17 22:14:27.105683] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:30.620 [2024-11-17 22:14:27.105826] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:30.620 true 00:14:30.620 22:14:27 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 00:14:30.620 22:14:27 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:30.879 22:14:27 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:30.879 22:14:27 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:31.138 22:14:27 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdaded0f-7d6d-4c16-b697-aaf0d56eb281 00:14:31.397 22:14:27 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:31.656 [2024-11-17 22:14:28.146478] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.656 22:14:28 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.915 22:14:28 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73170 00:14:31.915 22:14:28 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:31.915 22:14:28 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:31.915 22:14:28 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73170 /var/tmp/bdevperf.sock 00:14:31.915 22:14:28 -- 
common/autotest_common.sh@829 -- # '[' -z 73170 ']' 00:14:31.915 22:14:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.915 22:14:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.915 22:14:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.915 22:14:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.915 22:14:28 -- common/autotest_common.sh@10 -- # set +x 00:14:31.915 [2024-11-17 22:14:28.464958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:31.915 [2024-11-17 22:14:28.465073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73170 ] 00:14:32.174 [2024-11-17 22:14:28.604194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.174 [2024-11-17 22:14:28.747608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.109 22:14:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.109 22:14:29 -- common/autotest_common.sh@862 -- # return 0 00:14:33.110 22:14:29 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:33.368 Nvme0n1 00:14:33.368 22:14:29 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:33.627 [ 00:14:33.627 { 00:14:33.627 "aliases": [ 00:14:33.627 "bdaded0f-7d6d-4c16-b697-aaf0d56eb281" 00:14:33.627 ], 00:14:33.627 "assigned_rate_limits": { 00:14:33.627 "r_mbytes_per_sec": 0, 00:14:33.627 "rw_ios_per_sec": 0, 00:14:33.627 "rw_mbytes_per_sec": 0, 00:14:33.627 "w_mbytes_per_sec": 0 00:14:33.627 }, 00:14:33.627 "block_size": 4096, 00:14:33.627 "claimed": false, 00:14:33.627 "driver_specific": { 00:14:33.627 "mp_policy": "active_passive", 00:14:33.627 "nvme": [ 00:14:33.627 { 00:14:33.627 "ctrlr_data": { 00:14:33.627 "ana_reporting": false, 00:14:33.627 "cntlid": 1, 00:14:33.627 "firmware_revision": "24.01.1", 00:14:33.627 "model_number": "SPDK bdev Controller", 00:14:33.627 "multi_ctrlr": true, 00:14:33.627 "oacs": { 00:14:33.627 "firmware": 0, 00:14:33.627 "format": 0, 00:14:33.627 "ns_manage": 0, 00:14:33.627 "security": 0 00:14:33.627 }, 00:14:33.627 "serial_number": "SPDK0", 00:14:33.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:33.627 "vendor_id": "0x8086" 00:14:33.627 }, 00:14:33.627 "ns_data": { 00:14:33.627 "can_share": true, 00:14:33.627 "id": 1 00:14:33.627 }, 00:14:33.627 "trid": { 00:14:33.627 "adrfam": "IPv4", 00:14:33.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:33.627 "traddr": "10.0.0.2", 00:14:33.627 "trsvcid": "4420", 00:14:33.627 "trtype": "TCP" 00:14:33.627 }, 00:14:33.627 "vs": { 00:14:33.627 "nvme_version": "1.3" 00:14:33.627 } 00:14:33.627 } 00:14:33.627 ] 00:14:33.627 }, 00:14:33.627 "name": "Nvme0n1", 00:14:33.627 "num_blocks": 38912, 00:14:33.627 "product_name": "NVMe disk", 00:14:33.627 "supported_io_types": { 00:14:33.627 "abort": true, 00:14:33.627 "compare": true, 00:14:33.627 "compare_and_write": true, 00:14:33.627 "flush": true, 00:14:33.627 "nvme_admin": 
true, 00:14:33.627 "nvme_io": true, 00:14:33.627 "read": true, 00:14:33.627 "reset": true, 00:14:33.627 "unmap": true, 00:14:33.628 "write": true, 00:14:33.628 "write_zeroes": true 00:14:33.628 }, 00:14:33.628 "uuid": "bdaded0f-7d6d-4c16-b697-aaf0d56eb281", 00:14:33.628 "zoned": false 00:14:33.628 } 00:14:33.628 ] 00:14:33.628 22:14:30 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73212 00:14:33.628 22:14:30 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:33.628 22:14:30 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:33.886 Running I/O for 10 seconds... 00:14:34.823 Latency(us) 00:14:34.823 [2024-11-17T22:14:31.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.824 [2024-11-17T22:14:31.439Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.824 Nvme0n1 : 1.00 8742.00 34.15 0.00 0.00 0.00 0.00 0.00 00:14:34.824 [2024-11-17T22:14:31.439Z] =================================================================================================================== 00:14:34.824 [2024-11-17T22:14:31.439Z] Total : 8742.00 34.15 0.00 0.00 0.00 0.00 0.00 00:14:34.824 00:14:35.761 22:14:32 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 00:14:35.761 [2024-11-17T22:14:32.376Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.761 Nvme0n1 : 2.00 8803.00 34.39 0.00 0.00 0.00 0.00 0.00 00:14:35.761 [2024-11-17T22:14:32.376Z] =================================================================================================================== 00:14:35.761 [2024-11-17T22:14:32.376Z] Total : 8803.00 34.39 0.00 0.00 0.00 0.00 0.00 00:14:35.761 00:14:36.020 true 00:14:36.021 22:14:32 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 00:14:36.021 22:14:32 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:36.280 22:14:32 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:36.280 22:14:32 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:36.280 22:14:32 -- target/nvmf_lvs_grow.sh@65 -- # wait 73212 00:14:36.848 [2024-11-17T22:14:33.463Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.848 Nvme0n1 : 3.00 8744.33 34.16 0.00 0.00 0.00 0.00 0.00 00:14:36.848 [2024-11-17T22:14:33.463Z] =================================================================================================================== 00:14:36.848 [2024-11-17T22:14:33.463Z] Total : 8744.33 34.16 0.00 0.00 0.00 0.00 0.00 00:14:36.848 00:14:37.785 [2024-11-17T22:14:34.400Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.785 Nvme0n1 : 4.00 8791.50 34.34 0.00 0.00 0.00 0.00 0.00 00:14:37.785 [2024-11-17T22:14:34.400Z] =================================================================================================================== 00:14:37.785 [2024-11-17T22:14:34.400Z] Total : 8791.50 34.34 0.00 0.00 0.00 0.00 0.00 00:14:37.785 00:14:38.722 [2024-11-17T22:14:35.337Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.722 Nvme0n1 : 5.00 8812.60 34.42 0.00 0.00 0.00 0.00 0.00 00:14:38.722 [2024-11-17T22:14:35.337Z] 
=================================================================================================================== 00:14:38.722 [2024-11-17T22:14:35.337Z] Total : 8812.60 34.42 0.00 0.00 0.00 0.00 0.00 00:14:38.722 00:14:40.145 [2024-11-17T22:14:36.760Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.145 Nvme0n1 : 6.00 8976.33 35.06 0.00 0.00 0.00 0.00 0.00 00:14:40.145 [2024-11-17T22:14:36.760Z] =================================================================================================================== 00:14:40.145 [2024-11-17T22:14:36.760Z] Total : 8976.33 35.06 0.00 0.00 0.00 0.00 0.00 00:14:40.145 00:14:40.720 [2024-11-17T22:14:37.335Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.720 Nvme0n1 : 7.00 8711.00 34.03 0.00 0.00 0.00 0.00 0.00 00:14:40.720 [2024-11-17T22:14:37.335Z] =================================================================================================================== 00:14:40.720 [2024-11-17T22:14:37.335Z] Total : 8711.00 34.03 0.00 0.00 0.00 0.00 0.00 00:14:40.720 00:14:42.097 [2024-11-17T22:14:38.712Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.097 Nvme0n1 : 8.00 8479.88 33.12 0.00 0.00 0.00 0.00 0.00 00:14:42.097 [2024-11-17T22:14:38.712Z] =================================================================================================================== 00:14:42.097 [2024-11-17T22:14:38.712Z] Total : 8479.88 33.12 0.00 0.00 0.00 0.00 0.00 00:14:42.097 00:14:43.033 [2024-11-17T22:14:39.648Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.033 Nvme0n1 : 9.00 8324.44 32.52 0.00 0.00 0.00 0.00 0.00 00:14:43.033 [2024-11-17T22:14:39.648Z] =================================================================================================================== 00:14:43.033 [2024-11-17T22:14:39.648Z] Total : 8324.44 32.52 0.00 0.00 0.00 0.00 0.00 00:14:43.033 00:14:43.970 [2024-11-17T22:14:40.585Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.970 Nvme0n1 : 10.00 8204.50 32.05 0.00 0.00 0.00 0.00 0.00 00:14:43.970 [2024-11-17T22:14:40.585Z] =================================================================================================================== 00:14:43.970 [2024-11-17T22:14:40.585Z] Total : 8204.50 32.05 0.00 0.00 0.00 0.00 0.00 00:14:43.970 00:14:43.970 00:14:43.970 Latency(us) 00:14:43.970 [2024-11-17T22:14:40.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.970 [2024-11-17T22:14:40.585Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.970 Nvme0n1 : 10.01 8205.09 32.05 0.00 0.00 15595.71 6702.55 70063.94 00:14:43.970 [2024-11-17T22:14:40.585Z] =================================================================================================================== 00:14:43.970 [2024-11-17T22:14:40.585Z] Total : 8205.09 32.05 0.00 0.00 15595.71 6702.55 70063.94 00:14:43.970 0 00:14:43.970 22:14:40 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73170 00:14:43.970 22:14:40 -- common/autotest_common.sh@936 -- # '[' -z 73170 ']' 00:14:43.970 22:14:40 -- common/autotest_common.sh@940 -- # kill -0 73170 00:14:43.970 22:14:40 -- common/autotest_common.sh@941 -- # uname 00:14:43.970 22:14:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:43.970 22:14:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73170 00:14:43.970 22:14:40 -- common/autotest_common.sh@942 -- 
# process_name=reactor_1 00:14:43.970 22:14:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:43.970 killing process with pid 73170 00:14:43.970 22:14:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73170' 00:14:43.970 22:14:40 -- common/autotest_common.sh@955 -- # kill 73170 00:14:43.970 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.970 00:14:43.970 Latency(us) 00:14:43.970 [2024-11-17T22:14:40.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.970 [2024-11-17T22:14:40.585Z] =================================================================================================================== 00:14:43.970 [2024-11-17T22:14:40.585Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.970 22:14:40 -- common/autotest_common.sh@960 -- # wait 73170 00:14:44.229 22:14:40 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:44.487 22:14:40 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 00:14:44.487 22:14:40 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:44.745 22:14:41 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:44.745 22:14:41 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:44.745 22:14:41 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:45.004 [2024-11-17 22:14:41.436289] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:45.004 22:14:41 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 00:14:45.004 22:14:41 -- common/autotest_common.sh@650 -- # local es=0 00:14:45.004 22:14:41 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 00:14:45.004 22:14:41 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:45.004 22:14:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.004 22:14:41 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:45.004 22:14:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.004 22:14:41 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:45.004 22:14:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.004 22:14:41 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:45.004 22:14:41 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:45.004 22:14:41 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 00:14:45.263 2024/11/17 22:14:41 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:45.263 request: 00:14:45.263 { 00:14:45.263 "method": "bdev_lvol_get_lvstores", 00:14:45.263 "params": { 00:14:45.263 "uuid": "4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9" 00:14:45.263 } 00:14:45.263 } 00:14:45.263 Got JSON-RPC 
error response 00:14:45.263 GoRPCClient: error on JSON-RPC call 00:14:45.263 22:14:41 -- common/autotest_common.sh@653 -- # es=1 00:14:45.263 22:14:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.263 22:14:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.263 22:14:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.263 22:14:41 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:45.522 aio_bdev 00:14:45.522 22:14:41 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev bdaded0f-7d6d-4c16-b697-aaf0d56eb281 00:14:45.522 22:14:41 -- common/autotest_common.sh@897 -- # local bdev_name=bdaded0f-7d6d-4c16-b697-aaf0d56eb281 00:14:45.522 22:14:41 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:45.522 22:14:41 -- common/autotest_common.sh@899 -- # local i 00:14:45.522 22:14:41 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:45.522 22:14:41 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:45.522 22:14:41 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:45.522 22:14:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bdaded0f-7d6d-4c16-b697-aaf0d56eb281 -t 2000 00:14:45.781 [ 00:14:45.781 { 00:14:45.781 "aliases": [ 00:14:45.781 "lvs/lvol" 00:14:45.781 ], 00:14:45.781 "assigned_rate_limits": { 00:14:45.781 "r_mbytes_per_sec": 0, 00:14:45.781 "rw_ios_per_sec": 0, 00:14:45.781 "rw_mbytes_per_sec": 0, 00:14:45.781 "w_mbytes_per_sec": 0 00:14:45.781 }, 00:14:45.781 "block_size": 4096, 00:14:45.781 "claimed": false, 00:14:45.781 "driver_specific": { 00:14:45.781 "lvol": { 00:14:45.781 "base_bdev": "aio_bdev", 00:14:45.781 "clone": false, 00:14:45.781 "esnap_clone": false, 00:14:45.781 "lvol_store_uuid": "4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9", 00:14:45.781 "snapshot": false, 00:14:45.781 "thin_provision": false 00:14:45.781 } 00:14:45.781 }, 00:14:45.781 "name": "bdaded0f-7d6d-4c16-b697-aaf0d56eb281", 00:14:45.781 "num_blocks": 38912, 00:14:45.781 "product_name": "Logical Volume", 00:14:45.781 "supported_io_types": { 00:14:45.781 "abort": false, 00:14:45.781 "compare": false, 00:14:45.781 "compare_and_write": false, 00:14:45.781 "flush": false, 00:14:45.781 "nvme_admin": false, 00:14:45.781 "nvme_io": false, 00:14:45.781 "read": true, 00:14:45.781 "reset": true, 00:14:45.781 "unmap": true, 00:14:45.781 "write": true, 00:14:45.781 "write_zeroes": true 00:14:45.781 }, 00:14:45.781 "uuid": "bdaded0f-7d6d-4c16-b697-aaf0d56eb281", 00:14:45.781 "zoned": false 00:14:45.781 } 00:14:45.781 ] 00:14:45.781 22:14:42 -- common/autotest_common.sh@905 -- # return 0 00:14:45.781 22:14:42 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 00:14:45.781 22:14:42 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:46.040 22:14:42 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:46.040 22:14:42 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 00:14:46.040 22:14:42 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:46.298 22:14:42 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:46.298 22:14:42 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete bdaded0f-7d6d-4c16-b697-aaf0d56eb281 00:14:46.557 22:14:43 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4a3975b3-e7a5-46c7-bf5b-3c38c87e48c9 00:14:46.815 22:14:43 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:47.074 22:14:43 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:47.332 ************************************ 00:14:47.332 END TEST lvs_grow_clean 00:14:47.332 ************************************ 00:14:47.332 00:14:47.332 real 0m18.152s 00:14:47.332 user 0m17.657s 00:14:47.332 sys 0m2.164s 00:14:47.332 22:14:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:47.332 22:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:47.332 22:14:43 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:47.332 22:14:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:47.332 22:14:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:47.332 22:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:47.332 ************************************ 00:14:47.332 START TEST lvs_grow_dirty 00:14:47.332 ************************************ 00:14:47.332 22:14:43 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:14:47.332 22:14:43 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:47.332 22:14:43 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:47.332 22:14:43 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:47.332 22:14:43 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:47.332 22:14:43 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:47.332 22:14:43 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:47.332 22:14:43 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:47.332 22:14:43 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:47.332 22:14:43 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:47.899 22:14:44 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:47.899 22:14:44 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:48.158 22:14:44 -- target/nvmf_lvs_grow.sh@28 -- # lvs=099ba5ee-7a94-4281-be13-041991ae36dd 00:14:48.158 22:14:44 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 099ba5ee-7a94-4281-be13-041991ae36dd 00:14:48.158 22:14:44 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:48.158 22:14:44 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:48.158 22:14:44 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:48.158 22:14:44 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 099ba5ee-7a94-4281-be13-041991ae36dd lvol 150 00:14:48.725 22:14:45 -- target/nvmf_lvs_grow.sh@33 -- # lvol=d02d31c4-a716-44d0-b098-4e85a49e43ef 00:14:48.725 22:14:45 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:48.725 22:14:45 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:48.725 [2024-11-17 22:14:45.301623] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:48.725 [2024-11-17 22:14:45.301717] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:48.725 true 00:14:48.725 22:14:45 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 099ba5ee-7a94-4281-be13-041991ae36dd 00:14:48.725 22:14:45 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:48.984 22:14:45 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:48.984 22:14:45 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:49.242 22:14:45 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d02d31c4-a716-44d0-b098-4e85a49e43ef 00:14:49.501 22:14:45 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:49.759 22:14:46 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:50.018 22:14:46 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73595 00:14:50.018 22:14:46 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:50.018 22:14:46 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:50.018 22:14:46 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73595 /var/tmp/bdevperf.sock 00:14:50.018 22:14:46 -- common/autotest_common.sh@829 -- # '[' -z 73595 ']' 00:14:50.018 22:14:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.018 22:14:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.018 22:14:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:50.018 22:14:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.018 22:14:46 -- common/autotest_common.sh@10 -- # set +x 00:14:50.018 [2024-11-17 22:14:46.499848] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:50.018 [2024-11-17 22:14:46.499950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73595 ] 00:14:50.277 [2024-11-17 22:14:46.633292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.277 [2024-11-17 22:14:46.763130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.845 22:14:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.845 22:14:47 -- common/autotest_common.sh@862 -- # return 0 00:14:50.845 22:14:47 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:51.414 Nvme0n1 00:14:51.414 22:14:47 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:51.414 [ 00:14:51.414 { 00:14:51.414 "aliases": [ 00:14:51.414 "d02d31c4-a716-44d0-b098-4e85a49e43ef" 00:14:51.414 ], 00:14:51.414 "assigned_rate_limits": { 00:14:51.414 "r_mbytes_per_sec": 0, 00:14:51.414 "rw_ios_per_sec": 0, 00:14:51.414 "rw_mbytes_per_sec": 0, 00:14:51.414 "w_mbytes_per_sec": 0 00:14:51.414 }, 00:14:51.414 "block_size": 4096, 00:14:51.414 "claimed": false, 00:14:51.414 "driver_specific": { 00:14:51.414 "mp_policy": "active_passive", 00:14:51.414 "nvme": [ 00:14:51.414 { 00:14:51.414 "ctrlr_data": { 00:14:51.414 "ana_reporting": false, 00:14:51.414 "cntlid": 1, 00:14:51.414 "firmware_revision": "24.01.1", 00:14:51.414 "model_number": "SPDK bdev Controller", 00:14:51.414 "multi_ctrlr": true, 00:14:51.414 "oacs": { 00:14:51.414 "firmware": 0, 00:14:51.414 "format": 0, 00:14:51.414 "ns_manage": 0, 00:14:51.414 "security": 0 00:14:51.414 }, 00:14:51.414 "serial_number": "SPDK0", 00:14:51.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:51.414 "vendor_id": "0x8086" 00:14:51.414 }, 00:14:51.414 "ns_data": { 00:14:51.414 "can_share": true, 00:14:51.414 "id": 1 00:14:51.414 }, 00:14:51.414 "trid": { 00:14:51.414 "adrfam": "IPv4", 00:14:51.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:51.414 "traddr": "10.0.0.2", 00:14:51.414 "trsvcid": "4420", 00:14:51.414 "trtype": "TCP" 00:14:51.414 }, 00:14:51.414 "vs": { 00:14:51.414 "nvme_version": "1.3" 00:14:51.414 } 00:14:51.414 } 00:14:51.414 ] 00:14:51.414 }, 00:14:51.414 "name": "Nvme0n1", 00:14:51.414 "num_blocks": 38912, 00:14:51.414 "product_name": "NVMe disk", 00:14:51.414 "supported_io_types": { 00:14:51.414 "abort": true, 00:14:51.414 "compare": true, 00:14:51.414 "compare_and_write": true, 00:14:51.414 "flush": true, 00:14:51.414 "nvme_admin": true, 00:14:51.414 "nvme_io": true, 00:14:51.414 "read": true, 00:14:51.414 "reset": true, 00:14:51.414 "unmap": true, 00:14:51.414 "write": true, 00:14:51.414 "write_zeroes": true 00:14:51.414 }, 00:14:51.414 "uuid": "d02d31c4-a716-44d0-b098-4e85a49e43ef", 00:14:51.414 "zoned": false 00:14:51.414 } 00:14:51.414 ] 00:14:51.414 22:14:47 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73647 00:14:51.414 22:14:47 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:51.414 22:14:47 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:51.673 Running I/O for 10 seconds... 
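On the initiator side of the trace, bdevperf is the NVMe/TCP host: it gets its own RPC socket, attaches the exported namespace as bdev Nvme0n1, and the helper script then triggers the configured 10-second random-write run. Condensed to the essential commands, with the address and NQN exactly as above and $spdk standing in for the repository path (the per-second stats flag is omitted):

  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -z &
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Because bdevperf is started with -z it waits for the perform_tests RPC instead of running immediately, which is what lets the test grow the lvstore while the workload is already in flight.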
00:14:52.609 Latency(us) 00:14:52.609 [2024-11-17T22:14:49.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.609 [2024-11-17T22:14:49.224Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.609 Nvme0n1 : 1.00 9289.00 36.29 0.00 0.00 0.00 0.00 0.00 00:14:52.609 [2024-11-17T22:14:49.224Z] =================================================================================================================== 00:14:52.609 [2024-11-17T22:14:49.224Z] Total : 9289.00 36.29 0.00 0.00 0.00 0.00 0.00 00:14:52.609 00:14:53.588 22:14:50 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 099ba5ee-7a94-4281-be13-041991ae36dd 00:14:53.588 [2024-11-17T22:14:50.203Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.588 Nvme0n1 : 2.00 9461.50 36.96 0.00 0.00 0.00 0.00 0.00 00:14:53.588 [2024-11-17T22:14:50.203Z] =================================================================================================================== 00:14:53.588 [2024-11-17T22:14:50.203Z] Total : 9461.50 36.96 0.00 0.00 0.00 0.00 0.00 00:14:53.588 00:14:53.846 true 00:14:53.846 22:14:50 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 099ba5ee-7a94-4281-be13-041991ae36dd 00:14:53.846 22:14:50 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:54.105 22:14:50 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:54.105 22:14:50 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:54.105 22:14:50 -- target/nvmf_lvs_grow.sh@65 -- # wait 73647 00:14:54.672 [2024-11-17T22:14:51.287Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.672 Nvme0n1 : 3.00 9456.67 36.94 0.00 0.00 0.00 0.00 0.00 00:14:54.672 [2024-11-17T22:14:51.287Z] =================================================================================================================== 00:14:54.672 [2024-11-17T22:14:51.287Z] Total : 9456.67 36.94 0.00 0.00 0.00 0.00 0.00 00:14:54.672 00:14:55.608 [2024-11-17T22:14:52.223Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.608 Nvme0n1 : 4.00 9455.00 36.93 0.00 0.00 0.00 0.00 0.00 00:14:55.608 [2024-11-17T22:14:52.223Z] =================================================================================================================== 00:14:55.608 [2024-11-17T22:14:52.223Z] Total : 9455.00 36.93 0.00 0.00 0.00 0.00 0.00 00:14:55.609 00:14:56.544 [2024-11-17T22:14:53.159Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.544 Nvme0n1 : 5.00 9422.80 36.81 0.00 0.00 0.00 0.00 0.00 00:14:56.544 [2024-11-17T22:14:53.159Z] =================================================================================================================== 00:14:56.544 [2024-11-17T22:14:53.159Z] Total : 9422.80 36.81 0.00 0.00 0.00 0.00 0.00 00:14:56.544 00:14:57.480 [2024-11-17T22:14:54.095Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.480 Nvme0n1 : 6.00 9405.33 36.74 0.00 0.00 0.00 0.00 0.00 00:14:57.480 [2024-11-17T22:14:54.095Z] =================================================================================================================== 00:14:57.480 [2024-11-17T22:14:54.095Z] Total : 9405.33 36.74 0.00 0.00 0.00 0.00 0.00 00:14:57.480 00:14:58.857 [2024-11-17T22:14:55.472Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:14:58.857 Nvme0n1 : 7.00 9303.00 36.34 0.00 0.00 0.00 0.00 0.00 00:14:58.857 [2024-11-17T22:14:55.472Z] =================================================================================================================== 00:14:58.857 [2024-11-17T22:14:55.472Z] Total : 9303.00 36.34 0.00 0.00 0.00 0.00 0.00 00:14:58.857 00:14:59.792 [2024-11-17T22:14:56.407Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.792 Nvme0n1 : 8.00 9302.50 36.34 0.00 0.00 0.00 0.00 0.00 00:14:59.792 [2024-11-17T22:14:56.407Z] =================================================================================================================== 00:14:59.792 [2024-11-17T22:14:56.407Z] Total : 9302.50 36.34 0.00 0.00 0.00 0.00 0.00 00:14:59.792 00:15:00.727 [2024-11-17T22:14:57.342Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.727 Nvme0n1 : 9.00 9295.22 36.31 0.00 0.00 0.00 0.00 0.00 00:15:00.727 [2024-11-17T22:14:57.342Z] =================================================================================================================== 00:15:00.727 [2024-11-17T22:14:57.342Z] Total : 9295.22 36.31 0.00 0.00 0.00 0.00 0.00 00:15:00.727 00:15:01.663 [2024-11-17T22:14:58.278Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.663 Nvme0n1 : 10.00 9290.70 36.29 0.00 0.00 0.00 0.00 0.00 00:15:01.663 [2024-11-17T22:14:58.278Z] =================================================================================================================== 00:15:01.663 [2024-11-17T22:14:58.278Z] Total : 9290.70 36.29 0.00 0.00 0.00 0.00 0.00 00:15:01.663 00:15:01.663 00:15:01.663 Latency(us) 00:15:01.663 [2024-11-17T22:14:58.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.663 [2024-11-17T22:14:58.278Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.663 Nvme0n1 : 10.01 9291.74 36.30 0.00 0.00 13771.83 6047.19 86745.83 00:15:01.663 [2024-11-17T22:14:58.278Z] =================================================================================================================== 00:15:01.663 [2024-11-17T22:14:58.278Z] Total : 9291.74 36.30 0.00 0.00 13771.83 6047.19 86745.83 00:15:01.663 0 00:15:01.663 22:14:58 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73595 00:15:01.663 22:14:58 -- common/autotest_common.sh@936 -- # '[' -z 73595 ']' 00:15:01.663 22:14:58 -- common/autotest_common.sh@940 -- # kill -0 73595 00:15:01.663 22:14:58 -- common/autotest_common.sh@941 -- # uname 00:15:01.663 22:14:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.663 22:14:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73595 00:15:01.663 22:14:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:01.663 22:14:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:01.663 killing process with pid 73595 00:15:01.663 22:14:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73595' 00:15:01.664 Received shutdown signal, test time was about 10.000000 seconds 00:15:01.664 00:15:01.664 Latency(us) 00:15:01.664 [2024-11-17T22:14:58.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.664 [2024-11-17T22:14:58.279Z] =================================================================================================================== 00:15:01.664 [2024-11-17T22:14:58.279Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:01.664 22:14:58 -- common/autotest_common.sh@955 
-- # kill 73595 00:15:01.664 22:14:58 -- common/autotest_common.sh@960 -- # wait 73595 00:15:01.922 22:14:58 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:02.181 22:14:58 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 099ba5ee-7a94-4281-be13-041991ae36dd 00:15:02.181 22:14:58 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:02.439 22:14:59 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:02.439 22:14:59 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:02.439 22:14:59 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 73003 00:15:02.439 22:14:59 -- target/nvmf_lvs_grow.sh@74 -- # wait 73003 00:15:02.698 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 73003 Killed "${NVMF_APP[@]}" "$@" 00:15:02.698 22:14:59 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:02.698 22:14:59 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:02.698 22:14:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:02.698 22:14:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:02.698 22:14:59 -- common/autotest_common.sh@10 -- # set +x 00:15:02.698 22:14:59 -- nvmf/common.sh@469 -- # nvmfpid=73798 00:15:02.698 22:14:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:02.698 22:14:59 -- nvmf/common.sh@470 -- # waitforlisten 73798 00:15:02.698 22:14:59 -- common/autotest_common.sh@829 -- # '[' -z 73798 ']' 00:15:02.698 22:14:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.698 22:14:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.698 22:14:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.698 22:14:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.698 22:14:59 -- common/autotest_common.sh@10 -- # set +x 00:15:02.698 [2024-11-17 22:14:59.123989] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:02.698 [2024-11-17 22:14:59.124067] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.698 [2024-11-17 22:14:59.250391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.957 [2024-11-17 22:14:59.350667] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:02.957 [2024-11-17 22:14:59.350833] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.957 [2024-11-17 22:14:59.350846] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.957 [2024-11-17 22:14:59.350854] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
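The dirty leg of the test differs from the clean one in how the lvstore is reopened: the original target is killed with SIGKILL while the lvstore is still loaded, a fresh target is started, and re-registering the same backing file forces blobstore recovery on load (the "Performing recovery on blobstore" and "Recover: blob" notices just below). In outline, with $spdk, $nvmfpid and $lvs standing for the values visible in the trace:

  kill -9 "$nvmfpid"                                           # no clean lvstore shutdown
  ip netns exec nvmf_tgt_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  $spdk/scripts/rpc.py bdev_aio_create $spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  $spdk/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs"        # lvstore and lvol come back intact after recovery

The free/total cluster checks that follow confirm the grown geometry (99 data clusters, 61 free) survived the unclean shutdown.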
00:15:02.957 [2024-11-17 22:14:59.350893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.524 22:15:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.524 22:15:00 -- common/autotest_common.sh@862 -- # return 0 00:15:03.524 22:15:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:03.524 22:15:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:03.524 22:15:00 -- common/autotest_common.sh@10 -- # set +x 00:15:03.524 22:15:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.524 22:15:00 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:03.783 [2024-11-17 22:15:00.339716] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:03.783 [2024-11-17 22:15:00.340091] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:03.783 [2024-11-17 22:15:00.340235] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:03.783 22:15:00 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:03.783 22:15:00 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev d02d31c4-a716-44d0-b098-4e85a49e43ef 00:15:03.783 22:15:00 -- common/autotest_common.sh@897 -- # local bdev_name=d02d31c4-a716-44d0-b098-4e85a49e43ef 00:15:03.783 22:15:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:03.783 22:15:00 -- common/autotest_common.sh@899 -- # local i 00:15:03.783 22:15:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:03.783 22:15:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:03.783 22:15:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:04.041 22:15:00 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d02d31c4-a716-44d0-b098-4e85a49e43ef -t 2000 00:15:04.300 [ 00:15:04.300 { 00:15:04.300 "aliases": [ 00:15:04.300 "lvs/lvol" 00:15:04.300 ], 00:15:04.300 "assigned_rate_limits": { 00:15:04.300 "r_mbytes_per_sec": 0, 00:15:04.300 "rw_ios_per_sec": 0, 00:15:04.300 "rw_mbytes_per_sec": 0, 00:15:04.300 "w_mbytes_per_sec": 0 00:15:04.300 }, 00:15:04.300 "block_size": 4096, 00:15:04.300 "claimed": false, 00:15:04.300 "driver_specific": { 00:15:04.300 "lvol": { 00:15:04.300 "base_bdev": "aio_bdev", 00:15:04.300 "clone": false, 00:15:04.300 "esnap_clone": false, 00:15:04.300 "lvol_store_uuid": "099ba5ee-7a94-4281-be13-041991ae36dd", 00:15:04.300 "snapshot": false, 00:15:04.300 "thin_provision": false 00:15:04.300 } 00:15:04.300 }, 00:15:04.300 "name": "d02d31c4-a716-44d0-b098-4e85a49e43ef", 00:15:04.300 "num_blocks": 38912, 00:15:04.300 "product_name": "Logical Volume", 00:15:04.300 "supported_io_types": { 00:15:04.300 "abort": false, 00:15:04.300 "compare": false, 00:15:04.300 "compare_and_write": false, 00:15:04.300 "flush": false, 00:15:04.300 "nvme_admin": false, 00:15:04.300 "nvme_io": false, 00:15:04.300 "read": true, 00:15:04.300 "reset": true, 00:15:04.300 "unmap": true, 00:15:04.300 "write": true, 00:15:04.300 "write_zeroes": true 00:15:04.300 }, 00:15:04.300 "uuid": "d02d31c4-a716-44d0-b098-4e85a49e43ef", 00:15:04.300 "zoned": false 00:15:04.300 } 00:15:04.300 ] 00:15:04.300 22:15:00 -- common/autotest_common.sh@905 -- # return 0 00:15:04.300 22:15:00 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
099ba5ee-7a94-4281-be13-041991ae36dd 00:15:04.300 22:15:00 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:04.558 22:15:01 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:04.558 22:15:01 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 099ba5ee-7a94-4281-be13-041991ae36dd 00:15:04.558 22:15:01 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:04.817 22:15:01 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:04.817 22:15:01 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:05.075 [2024-11-17 22:15:01.520839] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:05.075 22:15:01 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 099ba5ee-7a94-4281-be13-041991ae36dd 00:15:05.075 22:15:01 -- common/autotest_common.sh@650 -- # local es=0 00:15:05.075 22:15:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 099ba5ee-7a94-4281-be13-041991ae36dd 00:15:05.075 22:15:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.075 22:15:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.075 22:15:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.075 22:15:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.075 22:15:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.075 22:15:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.075 22:15:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.075 22:15:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:05.075 22:15:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 099ba5ee-7a94-4281-be13-041991ae36dd 00:15:05.334 2024/11/17 22:15:01 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:099ba5ee-7a94-4281-be13-041991ae36dd], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:05.334 request: 00:15:05.334 { 00:15:05.334 "method": "bdev_lvol_get_lvstores", 00:15:05.334 "params": { 00:15:05.334 "uuid": "099ba5ee-7a94-4281-be13-041991ae36dd" 00:15:05.334 } 00:15:05.334 } 00:15:05.334 Got JSON-RPC error response 00:15:05.334 GoRPCClient: error on JSON-RPC call 00:15:05.334 22:15:01 -- common/autotest_common.sh@653 -- # es=1 00:15:05.334 22:15:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:05.334 22:15:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:05.334 22:15:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:05.334 22:15:01 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:05.593 aio_bdev 00:15:05.593 22:15:02 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev d02d31c4-a716-44d0-b098-4e85a49e43ef 00:15:05.593 22:15:02 -- common/autotest_common.sh@897 -- # local bdev_name=d02d31c4-a716-44d0-b098-4e85a49e43ef 00:15:05.593 22:15:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:05.593 
22:15:02 -- common/autotest_common.sh@899 -- # local i 00:15:05.593 22:15:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:05.593 22:15:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:05.593 22:15:02 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:05.851 22:15:02 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d02d31c4-a716-44d0-b098-4e85a49e43ef -t 2000 00:15:06.111 [ 00:15:06.111 { 00:15:06.111 "aliases": [ 00:15:06.111 "lvs/lvol" 00:15:06.111 ], 00:15:06.111 "assigned_rate_limits": { 00:15:06.111 "r_mbytes_per_sec": 0, 00:15:06.111 "rw_ios_per_sec": 0, 00:15:06.111 "rw_mbytes_per_sec": 0, 00:15:06.111 "w_mbytes_per_sec": 0 00:15:06.111 }, 00:15:06.111 "block_size": 4096, 00:15:06.111 "claimed": false, 00:15:06.111 "driver_specific": { 00:15:06.111 "lvol": { 00:15:06.111 "base_bdev": "aio_bdev", 00:15:06.111 "clone": false, 00:15:06.111 "esnap_clone": false, 00:15:06.111 "lvol_store_uuid": "099ba5ee-7a94-4281-be13-041991ae36dd", 00:15:06.111 "snapshot": false, 00:15:06.111 "thin_provision": false 00:15:06.111 } 00:15:06.111 }, 00:15:06.111 "name": "d02d31c4-a716-44d0-b098-4e85a49e43ef", 00:15:06.111 "num_blocks": 38912, 00:15:06.111 "product_name": "Logical Volume", 00:15:06.111 "supported_io_types": { 00:15:06.111 "abort": false, 00:15:06.111 "compare": false, 00:15:06.111 "compare_and_write": false, 00:15:06.111 "flush": false, 00:15:06.111 "nvme_admin": false, 00:15:06.111 "nvme_io": false, 00:15:06.111 "read": true, 00:15:06.111 "reset": true, 00:15:06.111 "unmap": true, 00:15:06.111 "write": true, 00:15:06.111 "write_zeroes": true 00:15:06.111 }, 00:15:06.111 "uuid": "d02d31c4-a716-44d0-b098-4e85a49e43ef", 00:15:06.111 "zoned": false 00:15:06.111 } 00:15:06.111 ] 00:15:06.111 22:15:02 -- common/autotest_common.sh@905 -- # return 0 00:15:06.111 22:15:02 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:06.111 22:15:02 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 099ba5ee-7a94-4281-be13-041991ae36dd 00:15:06.370 22:15:02 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:06.370 22:15:02 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 099ba5ee-7a94-4281-be13-041991ae36dd 00:15:06.370 22:15:02 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:06.628 22:15:03 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:06.628 22:15:03 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d02d31c4-a716-44d0-b098-4e85a49e43ef 00:15:06.888 22:15:03 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 099ba5ee-7a94-4281-be13-041991ae36dd 00:15:07.147 22:15:03 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:07.407 22:15:03 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:07.975 ************************************ 00:15:07.975 END TEST lvs_grow_dirty 00:15:07.975 ************************************ 00:15:07.975 00:15:07.975 real 0m20.491s 00:15:07.975 user 0m41.600s 00:15:07.975 sys 0m8.349s 00:15:07.975 22:15:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:07.975 22:15:04 -- common/autotest_common.sh@10 -- # set +x 00:15:07.975 22:15:04 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:07.975 22:15:04 -- common/autotest_common.sh@806 -- # type=--id 00:15:07.975 22:15:04 -- common/autotest_common.sh@807 -- # id=0 00:15:07.975 22:15:04 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:07.975 22:15:04 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:07.975 22:15:04 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:07.975 22:15:04 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:07.975 22:15:04 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:07.975 22:15:04 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:07.975 nvmf_trace.0 00:15:07.975 22:15:04 -- common/autotest_common.sh@821 -- # return 0 00:15:07.975 22:15:04 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:07.975 22:15:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:07.975 22:15:04 -- nvmf/common.sh@116 -- # sync 00:15:08.543 22:15:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:08.543 22:15:04 -- nvmf/common.sh@119 -- # set +e 00:15:08.543 22:15:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:08.543 22:15:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:08.543 rmmod nvme_tcp 00:15:08.543 rmmod nvme_fabrics 00:15:08.543 rmmod nvme_keyring 00:15:08.543 22:15:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:08.543 22:15:05 -- nvmf/common.sh@123 -- # set -e 00:15:08.543 22:15:05 -- nvmf/common.sh@124 -- # return 0 00:15:08.543 22:15:05 -- nvmf/common.sh@477 -- # '[' -n 73798 ']' 00:15:08.543 22:15:05 -- nvmf/common.sh@478 -- # killprocess 73798 00:15:08.543 22:15:05 -- common/autotest_common.sh@936 -- # '[' -z 73798 ']' 00:15:08.543 22:15:05 -- common/autotest_common.sh@940 -- # kill -0 73798 00:15:08.543 22:15:05 -- common/autotest_common.sh@941 -- # uname 00:15:08.543 22:15:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.543 22:15:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73798 00:15:08.543 22:15:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:08.543 22:15:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:08.543 killing process with pid 73798 00:15:08.543 22:15:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73798' 00:15:08.543 22:15:05 -- common/autotest_common.sh@955 -- # kill 73798 00:15:08.543 22:15:05 -- common/autotest_common.sh@960 -- # wait 73798 00:15:08.802 22:15:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:08.802 22:15:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:08.802 22:15:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:08.802 22:15:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.802 22:15:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:08.802 22:15:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.802 22:15:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.802 22:15:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.802 22:15:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:09.061 00:15:09.061 real 0m41.652s 00:15:09.061 user 1m6.218s 00:15:09.061 sys 0m11.528s 00:15:09.061 22:15:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:09.061 ************************************ 00:15:09.061 END TEST nvmf_lvs_grow 00:15:09.061 22:15:05 -- 
common/autotest_common.sh@10 -- # set +x 00:15:09.061 ************************************ 00:15:09.061 22:15:05 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:09.061 22:15:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:09.061 22:15:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:09.061 22:15:05 -- common/autotest_common.sh@10 -- # set +x 00:15:09.061 ************************************ 00:15:09.061 START TEST nvmf_bdev_io_wait 00:15:09.061 ************************************ 00:15:09.061 22:15:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:09.061 * Looking for test storage... 00:15:09.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:09.061 22:15:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:09.061 22:15:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:09.061 22:15:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:09.061 22:15:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:09.061 22:15:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:09.061 22:15:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:09.061 22:15:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:09.061 22:15:05 -- scripts/common.sh@335 -- # IFS=.-: 00:15:09.061 22:15:05 -- scripts/common.sh@335 -- # read -ra ver1 00:15:09.061 22:15:05 -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.061 22:15:05 -- scripts/common.sh@336 -- # read -ra ver2 00:15:09.061 22:15:05 -- scripts/common.sh@337 -- # local 'op=<' 00:15:09.061 22:15:05 -- scripts/common.sh@339 -- # ver1_l=2 00:15:09.061 22:15:05 -- scripts/common.sh@340 -- # ver2_l=1 00:15:09.061 22:15:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:09.061 22:15:05 -- scripts/common.sh@343 -- # case "$op" in 00:15:09.061 22:15:05 -- scripts/common.sh@344 -- # : 1 00:15:09.061 22:15:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:09.061 22:15:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.061 22:15:05 -- scripts/common.sh@364 -- # decimal 1 00:15:09.061 22:15:05 -- scripts/common.sh@352 -- # local d=1 00:15:09.061 22:15:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.061 22:15:05 -- scripts/common.sh@354 -- # echo 1 00:15:09.061 22:15:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:09.061 22:15:05 -- scripts/common.sh@365 -- # decimal 2 00:15:09.061 22:15:05 -- scripts/common.sh@352 -- # local d=2 00:15:09.061 22:15:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.061 22:15:05 -- scripts/common.sh@354 -- # echo 2 00:15:09.061 22:15:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:09.061 22:15:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:09.061 22:15:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:09.061 22:15:05 -- scripts/common.sh@367 -- # return 0 00:15:09.061 22:15:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.061 22:15:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:09.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.061 --rc genhtml_branch_coverage=1 00:15:09.061 --rc genhtml_function_coverage=1 00:15:09.061 --rc genhtml_legend=1 00:15:09.061 --rc geninfo_all_blocks=1 00:15:09.061 --rc geninfo_unexecuted_blocks=1 00:15:09.061 00:15:09.061 ' 00:15:09.061 22:15:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:09.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.061 --rc genhtml_branch_coverage=1 00:15:09.061 --rc genhtml_function_coverage=1 00:15:09.061 --rc genhtml_legend=1 00:15:09.061 --rc geninfo_all_blocks=1 00:15:09.061 --rc geninfo_unexecuted_blocks=1 00:15:09.062 00:15:09.062 ' 00:15:09.062 22:15:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:09.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.062 --rc genhtml_branch_coverage=1 00:15:09.062 --rc genhtml_function_coverage=1 00:15:09.062 --rc genhtml_legend=1 00:15:09.062 --rc geninfo_all_blocks=1 00:15:09.062 --rc geninfo_unexecuted_blocks=1 00:15:09.062 00:15:09.062 ' 00:15:09.062 22:15:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:09.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.062 --rc genhtml_branch_coverage=1 00:15:09.062 --rc genhtml_function_coverage=1 00:15:09.062 --rc genhtml_legend=1 00:15:09.062 --rc geninfo_all_blocks=1 00:15:09.062 --rc geninfo_unexecuted_blocks=1 00:15:09.062 00:15:09.062 ' 00:15:09.062 22:15:05 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:09.062 22:15:05 -- nvmf/common.sh@7 -- # uname -s 00:15:09.062 22:15:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.062 22:15:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.062 22:15:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.062 22:15:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.062 22:15:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.062 22:15:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.062 22:15:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.062 22:15:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.062 22:15:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.062 22:15:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.321 22:15:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 
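For reference, the NVME_HOSTNQN/NVME_HOSTID pair generated by nvmf/common.sh in the trace above boils down to the short sequence below; the parameter expansion used to peel the UUID out of the NQN is a sketch, since the exact extraction done by common.sh is not visible in this excerpt:

    # generate a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # reuse the UUID suffix as the host ID (sketch; common.sh's exact extraction is not shown here)
    NVME_HOSTID=${NVME_HOSTNQN##*:}
    # later handed to 'nvme connect'-style commands as --hostnqn/--hostid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")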
00:15:09.321 22:15:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:15:09.321 22:15:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.321 22:15:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.321 22:15:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:09.321 22:15:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:09.321 22:15:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.321 22:15:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.321 22:15:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.321 22:15:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.321 22:15:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.321 22:15:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.321 22:15:05 -- paths/export.sh@5 -- # export PATH 00:15:09.321 22:15:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.321 22:15:05 -- nvmf/common.sh@46 -- # : 0 00:15:09.321 22:15:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:09.321 22:15:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:09.321 22:15:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:09.321 22:15:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.321 22:15:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.321 22:15:05 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:09.321 22:15:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:09.321 22:15:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:09.321 22:15:05 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:09.321 22:15:05 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:09.321 22:15:05 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:09.321 22:15:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:09.321 22:15:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.321 22:15:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:09.321 22:15:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:09.321 22:15:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:09.321 22:15:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.321 22:15:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.321 22:15:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.321 22:15:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:09.321 22:15:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:09.321 22:15:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:09.321 22:15:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:09.321 22:15:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:09.321 22:15:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:09.321 22:15:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.321 22:15:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.321 22:15:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:09.321 22:15:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:09.321 22:15:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:09.321 22:15:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:09.321 22:15:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:09.321 22:15:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.321 22:15:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:09.321 22:15:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:09.321 22:15:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:09.321 22:15:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:09.321 22:15:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:09.321 22:15:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:09.322 Cannot find device "nvmf_tgt_br" 00:15:09.322 22:15:05 -- nvmf/common.sh@154 -- # true 00:15:09.322 22:15:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:09.322 Cannot find device "nvmf_tgt_br2" 00:15:09.322 22:15:05 -- nvmf/common.sh@155 -- # true 00:15:09.322 22:15:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:09.322 22:15:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:09.322 Cannot find device "nvmf_tgt_br" 00:15:09.322 22:15:05 -- nvmf/common.sh@157 -- # true 00:15:09.322 22:15:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:09.322 Cannot find device "nvmf_tgt_br2" 00:15:09.322 22:15:05 -- nvmf/common.sh@158 -- # true 00:15:09.322 22:15:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:09.322 22:15:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:09.322 22:15:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:09.322 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.322 22:15:05 -- nvmf/common.sh@161 -- # true 00:15:09.322 22:15:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:09.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.322 22:15:05 -- nvmf/common.sh@162 -- # true 00:15:09.322 22:15:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:09.322 22:15:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:09.322 22:15:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:09.322 22:15:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:09.322 22:15:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:09.322 22:15:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:09.322 22:15:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:09.322 22:15:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:09.322 22:15:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:09.322 22:15:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:09.322 22:15:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:09.322 22:15:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:09.322 22:15:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:09.322 22:15:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:09.322 22:15:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:09.322 22:15:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:09.322 22:15:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:09.322 22:15:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:09.582 22:15:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:09.582 22:15:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:09.582 22:15:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:09.582 22:15:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:09.582 22:15:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:09.582 22:15:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:09.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:15:09.582 00:15:09.582 --- 10.0.0.2 ping statistics --- 00:15:09.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.582 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:15:09.582 22:15:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:09.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:09.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:09.582 00:15:09.582 --- 10.0.0.3 ping statistics --- 00:15:09.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.582 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:09.582 22:15:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:09.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:09.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:09.582 00:15:09.582 --- 10.0.0.1 ping statistics --- 00:15:09.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.582 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:09.582 22:15:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.582 22:15:06 -- nvmf/common.sh@421 -- # return 0 00:15:09.582 22:15:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:09.582 22:15:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.582 22:15:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:09.582 22:15:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:09.582 22:15:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.582 22:15:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:09.582 22:15:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:09.582 22:15:06 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:09.582 22:15:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:09.582 22:15:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:09.582 22:15:06 -- common/autotest_common.sh@10 -- # set +x 00:15:09.582 22:15:06 -- nvmf/common.sh@469 -- # nvmfpid=74218 00:15:09.582 22:15:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:09.582 22:15:06 -- nvmf/common.sh@470 -- # waitforlisten 74218 00:15:09.582 22:15:06 -- common/autotest_common.sh@829 -- # '[' -z 74218 ']' 00:15:09.582 22:15:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.582 22:15:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.582 22:15:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.582 22:15:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.582 22:15:06 -- common/autotest_common.sh@10 -- # set +x 00:15:09.582 [2024-11-17 22:15:06.106666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:09.582 [2024-11-17 22:15:06.106758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.842 [2024-11-17 22:15:06.245392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.842 [2024-11-17 22:15:06.351214] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:09.842 [2024-11-17 22:15:06.351354] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.842 [2024-11-17 22:15:06.351366] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.842 [2024-11-17 22:15:06.351375] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
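The nvmf_veth_init steps traced above wire the initiator to a target network namespace through a small veth/bridge topology before nvmf_tgt is started inside it. A condensed sketch of that wiring, using the interface names and 10.0.0.x addresses shown in the trace (orientation only; nvmf/common.sh remains the authoritative version):

    # target side lives in its own namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target    <-> bridge
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: initiator 10.0.0.1, target answers on 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring the links up and join the bridge-facing ends into nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow the NVMe/TCP listener port and bridge-local forwarding, then verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3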
00:15:09.842 [2024-11-17 22:15:06.351526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.842 [2024-11-17 22:15:06.351978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.842 [2024-11-17 22:15:06.352095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.842 [2024-11-17 22:15:06.352100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.452 22:15:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.452 22:15:07 -- common/autotest_common.sh@862 -- # return 0 00:15:10.452 22:15:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:10.452 22:15:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:10.452 22:15:07 -- common/autotest_common.sh@10 -- # set +x 00:15:10.452 22:15:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.452 22:15:07 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:10.452 22:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.452 22:15:07 -- common/autotest_common.sh@10 -- # set +x 00:15:10.452 22:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.452 22:15:07 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:10.452 22:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.452 22:15:07 -- common/autotest_common.sh@10 -- # set +x 00:15:10.712 22:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:10.712 22:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.712 22:15:07 -- common/autotest_common.sh@10 -- # set +x 00:15:10.712 [2024-11-17 22:15:07.159677] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.712 22:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:10.712 22:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.712 22:15:07 -- common/autotest_common.sh@10 -- # set +x 00:15:10.712 Malloc0 00:15:10.712 22:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:10.712 22:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.712 22:15:07 -- common/autotest_common.sh@10 -- # set +x 00:15:10.712 22:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:10.712 22:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.712 22:15:07 -- common/autotest_common.sh@10 -- # set +x 00:15:10.712 22:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.712 22:15:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.712 22:15:07 -- common/autotest_common.sh@10 -- # set +x 00:15:10.712 [2024-11-17 22:15:07.227222] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.712 22:15:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74277 00:15:10.712 22:15:07 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:10.712 22:15:07 -- nvmf/common.sh@520 -- # config=() 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@30 -- # READ_PID=74279 00:15:10.712 22:15:07 -- nvmf/common.sh@520 -- # local subsystem config 00:15:10.712 22:15:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:10.712 22:15:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:10.712 { 00:15:10.712 "params": { 00:15:10.712 "name": "Nvme$subsystem", 00:15:10.712 "trtype": "$TEST_TRANSPORT", 00:15:10.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.712 "adrfam": "ipv4", 00:15:10.712 "trsvcid": "$NVMF_PORT", 00:15:10.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.712 "hdgst": ${hdgst:-false}, 00:15:10.712 "ddgst": ${ddgst:-false} 00:15:10.712 }, 00:15:10.712 "method": "bdev_nvme_attach_controller" 00:15:10.712 } 00:15:10.712 EOF 00:15:10.712 )") 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74281 00:15:10.712 22:15:07 -- nvmf/common.sh@520 -- # config=() 00:15:10.712 22:15:07 -- nvmf/common.sh@520 -- # local subsystem config 00:15:10.712 22:15:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:10.712 22:15:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:10.712 { 00:15:10.712 "params": { 00:15:10.712 "name": "Nvme$subsystem", 00:15:10.712 "trtype": "$TEST_TRANSPORT", 00:15:10.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.712 "adrfam": "ipv4", 00:15:10.712 "trsvcid": "$NVMF_PORT", 00:15:10.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.712 "hdgst": ${hdgst:-false}, 00:15:10.712 "ddgst": ${ddgst:-false} 00:15:10.712 }, 00:15:10.712 "method": "bdev_nvme_attach_controller" 00:15:10.712 } 00:15:10.712 EOF 00:15:10.712 )") 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74284 00:15:10.712 22:15:07 -- nvmf/common.sh@542 -- # cat 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@35 -- # sync 00:15:10.712 22:15:07 -- nvmf/common.sh@542 -- # cat 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:10.712 22:15:07 -- nvmf/common.sh@520 -- # config=() 00:15:10.712 22:15:07 -- nvmf/common.sh@520 -- # local subsystem config 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:10.712 22:15:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:10.712 22:15:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:10.712 { 00:15:10.712 "params": { 00:15:10.712 "name": "Nvme$subsystem", 00:15:10.712 "trtype": "$TEST_TRANSPORT", 00:15:10.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.712 "adrfam": "ipv4", 00:15:10.712 "trsvcid": "$NVMF_PORT", 00:15:10.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.712 "hdgst": ${hdgst:-false}, 00:15:10.712 "ddgst": ${ddgst:-false} 00:15:10.712 }, 00:15:10.712 "method": 
"bdev_nvme_attach_controller" 00:15:10.712 } 00:15:10.712 EOF 00:15:10.712 )") 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:10.712 22:15:07 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:10.712 22:15:07 -- nvmf/common.sh@520 -- # config=() 00:15:10.712 22:15:07 -- nvmf/common.sh@520 -- # local subsystem config 00:15:10.712 22:15:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:10.712 22:15:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:10.712 { 00:15:10.712 "params": { 00:15:10.712 "name": "Nvme$subsystem", 00:15:10.712 "trtype": "$TEST_TRANSPORT", 00:15:10.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.712 "adrfam": "ipv4", 00:15:10.712 "trsvcid": "$NVMF_PORT", 00:15:10.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.713 "hdgst": ${hdgst:-false}, 00:15:10.713 "ddgst": ${ddgst:-false} 00:15:10.713 }, 00:15:10.713 "method": "bdev_nvme_attach_controller" 00:15:10.713 } 00:15:10.713 EOF 00:15:10.713 )") 00:15:10.713 22:15:07 -- nvmf/common.sh@544 -- # jq . 00:15:10.713 22:15:07 -- nvmf/common.sh@542 -- # cat 00:15:10.713 22:15:07 -- nvmf/common.sh@544 -- # jq . 00:15:10.713 22:15:07 -- nvmf/common.sh@542 -- # cat 00:15:10.713 22:15:07 -- nvmf/common.sh@545 -- # IFS=, 00:15:10.713 22:15:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:10.713 "params": { 00:15:10.713 "name": "Nvme1", 00:15:10.713 "trtype": "tcp", 00:15:10.713 "traddr": "10.0.0.2", 00:15:10.713 "adrfam": "ipv4", 00:15:10.713 "trsvcid": "4420", 00:15:10.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.713 "hdgst": false, 00:15:10.713 "ddgst": false 00:15:10.713 }, 00:15:10.713 "method": "bdev_nvme_attach_controller" 00:15:10.713 }' 00:15:10.713 22:15:07 -- nvmf/common.sh@545 -- # IFS=, 00:15:10.713 22:15:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:10.713 "params": { 00:15:10.713 "name": "Nvme1", 00:15:10.713 "trtype": "tcp", 00:15:10.713 "traddr": "10.0.0.2", 00:15:10.713 "adrfam": "ipv4", 00:15:10.713 "trsvcid": "4420", 00:15:10.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.713 "hdgst": false, 00:15:10.713 "ddgst": false 00:15:10.713 }, 00:15:10.713 "method": "bdev_nvme_attach_controller" 00:15:10.713 }' 00:15:10.713 22:15:07 -- nvmf/common.sh@544 -- # jq . 00:15:10.713 22:15:07 -- nvmf/common.sh@545 -- # IFS=, 00:15:10.713 22:15:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:10.713 "params": { 00:15:10.713 "name": "Nvme1", 00:15:10.713 "trtype": "tcp", 00:15:10.713 "traddr": "10.0.0.2", 00:15:10.713 "adrfam": "ipv4", 00:15:10.713 "trsvcid": "4420", 00:15:10.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.713 "hdgst": false, 00:15:10.713 "ddgst": false 00:15:10.713 }, 00:15:10.713 "method": "bdev_nvme_attach_controller" 00:15:10.713 }' 00:15:10.713 22:15:07 -- nvmf/common.sh@544 -- # jq . 
00:15:10.713 22:15:07 -- nvmf/common.sh@545 -- # IFS=, 00:15:10.713 22:15:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:10.713 "params": { 00:15:10.713 "name": "Nvme1", 00:15:10.713 "trtype": "tcp", 00:15:10.713 "traddr": "10.0.0.2", 00:15:10.713 "adrfam": "ipv4", 00:15:10.713 "trsvcid": "4420", 00:15:10.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.713 "hdgst": false, 00:15:10.713 "ddgst": false 00:15:10.713 }, 00:15:10.713 "method": "bdev_nvme_attach_controller" 00:15:10.713 }' 00:15:10.713 [2024-11-17 22:15:07.303156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:10.713 [2024-11-17 22:15:07.303240] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:10.713 22:15:07 -- target/bdev_io_wait.sh@37 -- # wait 74277 00:15:10.713 [2024-11-17 22:15:07.316569] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:10.713 [2024-11-17 22:15:07.316652] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:10.713 [2024-11-17 22:15:07.321986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:10.713 [2024-11-17 22:15:07.322067] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:10.972 [2024-11-17 22:15:07.339869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:10.972 [2024-11-17 22:15:07.339974] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:10.973 [2024-11-17 22:15:07.548354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.232 [2024-11-17 22:15:07.622260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.232 [2024-11-17 22:15:07.671180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:11.232 [2024-11-17 22:15:07.699813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.232 [2024-11-17 22:15:07.709896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:11.232 [2024-11-17 22:15:07.781689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.232 [2024-11-17 22:15:07.803518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:11.232 Running I/O for 1 seconds... 00:15:11.490 Running I/O for 1 seconds... 00:15:11.490 [2024-11-17 22:15:07.884226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:11.490 Running I/O for 1 seconds... 00:15:11.490 Running I/O for 1 seconds... 
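Each of the four bdevperf instances launched above (write/read/flush/unmap, one core apiece) consumes a generated JSON config on /dev/fd/63 that attaches the target's nqn.2016-06.io.spdk:cnode1 subsystem as a local NVMe bdev. A minimal equivalent of one invocation is sketched below; the harness streams the config through process substitution rather than a temp file, and the top-level subsystems/config wrapper is an assumption here, since the trace only prints the inner method/params fragment:

    # sketch: one instance of the write workload (core mask 0x10) against the target;
    # the JSON wrapper layout is assumed, only the method/params block appears in the trace
    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        -q 128 -o 4096 -w write -t 1 -s 256 --json /tmp/bdevperf_nvme.json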
00:15:12.427 00:15:12.427 Latency(us) 00:15:12.427 [2024-11-17T22:15:09.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.427 [2024-11-17T22:15:09.042Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:12.427 Nvme1n1 : 1.03 4129.70 16.13 0.00 0.00 30589.42 10724.07 45041.11 00:15:12.427 [2024-11-17T22:15:09.042Z] =================================================================================================================== 00:15:12.427 [2024-11-17T22:15:09.042Z] Total : 4129.70 16.13 0.00 0.00 30589.42 10724.07 45041.11 00:15:12.427 00:15:12.427 Latency(us) 00:15:12.427 [2024-11-17T22:15:09.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.427 [2024-11-17T22:15:09.042Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:12.427 Nvme1n1 : 1.01 8143.16 31.81 0.00 0.00 15645.89 7864.32 28716.68 00:15:12.427 [2024-11-17T22:15:09.042Z] =================================================================================================================== 00:15:12.427 [2024-11-17T22:15:09.042Z] Total : 8143.16 31.81 0.00 0.00 15645.89 7864.32 28716.68 00:15:12.427 00:15:12.427 Latency(us) 00:15:12.427 [2024-11-17T22:15:09.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.427 [2024-11-17T22:15:09.042Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:12.427 Nvme1n1 : 1.01 4274.67 16.70 0.00 0.00 29810.13 8460.10 63391.19 00:15:12.427 [2024-11-17T22:15:09.042Z] =================================================================================================================== 00:15:12.427 [2024-11-17T22:15:09.042Z] Total : 4274.67 16.70 0.00 0.00 29810.13 8460.10 63391.19 00:15:12.427 00:15:12.427 Latency(us) 00:15:12.427 [2024-11-17T22:15:09.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.427 [2024-11-17T22:15:09.042Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:12.427 Nvme1n1 : 1.00 231248.18 903.31 0.00 0.00 551.26 224.35 848.99 00:15:12.428 [2024-11-17T22:15:09.043Z] =================================================================================================================== 00:15:12.428 [2024-11-17T22:15:09.043Z] Total : 231248.18 903.31 0.00 0.00 551.26 224.35 848.99 00:15:12.687 22:15:09 -- target/bdev_io_wait.sh@38 -- # wait 74279 00:15:12.946 22:15:09 -- target/bdev_io_wait.sh@39 -- # wait 74281 00:15:12.946 22:15:09 -- target/bdev_io_wait.sh@40 -- # wait 74284 00:15:12.946 22:15:09 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.946 22:15:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.946 22:15:09 -- common/autotest_common.sh@10 -- # set +x 00:15:12.946 22:15:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.946 22:15:09 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:12.946 22:15:09 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:12.946 22:15:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:12.946 22:15:09 -- nvmf/common.sh@116 -- # sync 00:15:12.946 22:15:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:12.946 22:15:09 -- nvmf/common.sh@119 -- # set +e 00:15:12.946 22:15:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:12.946 22:15:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:12.946 rmmod nvme_tcp 00:15:12.946 rmmod nvme_fabrics 00:15:12.946 rmmod nvme_keyring 00:15:12.946 22:15:09 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:12.946 22:15:09 -- nvmf/common.sh@123 -- # set -e 00:15:12.946 22:15:09 -- nvmf/common.sh@124 -- # return 0 00:15:12.946 22:15:09 -- nvmf/common.sh@477 -- # '[' -n 74218 ']' 00:15:12.946 22:15:09 -- nvmf/common.sh@478 -- # killprocess 74218 00:15:12.946 22:15:09 -- common/autotest_common.sh@936 -- # '[' -z 74218 ']' 00:15:12.946 22:15:09 -- common/autotest_common.sh@940 -- # kill -0 74218 00:15:12.946 22:15:09 -- common/autotest_common.sh@941 -- # uname 00:15:12.946 22:15:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.946 22:15:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74218 00:15:12.946 22:15:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:12.946 22:15:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:12.946 killing process with pid 74218 00:15:12.946 22:15:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74218' 00:15:12.946 22:15:09 -- common/autotest_common.sh@955 -- # kill 74218 00:15:12.946 22:15:09 -- common/autotest_common.sh@960 -- # wait 74218 00:15:13.204 22:15:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:13.204 22:15:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:13.204 22:15:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:13.204 22:15:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.204 22:15:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:13.204 22:15:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.204 22:15:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.204 22:15:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.204 22:15:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:13.464 00:15:13.464 real 0m4.341s 00:15:13.464 user 0m19.150s 00:15:13.464 sys 0m1.946s 00:15:13.464 22:15:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:13.464 22:15:09 -- common/autotest_common.sh@10 -- # set +x 00:15:13.464 ************************************ 00:15:13.464 END TEST nvmf_bdev_io_wait 00:15:13.464 ************************************ 00:15:13.464 22:15:09 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:13.464 22:15:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:13.464 22:15:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.464 22:15:09 -- common/autotest_common.sh@10 -- # set +x 00:15:13.464 ************************************ 00:15:13.464 START TEST nvmf_queue_depth 00:15:13.464 ************************************ 00:15:13.464 22:15:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:13.464 * Looking for test storage... 
00:15:13.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:13.464 22:15:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:13.464 22:15:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:13.464 22:15:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:13.464 22:15:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:13.464 22:15:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:13.464 22:15:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:13.464 22:15:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:13.464 22:15:10 -- scripts/common.sh@335 -- # IFS=.-: 00:15:13.464 22:15:10 -- scripts/common.sh@335 -- # read -ra ver1 00:15:13.464 22:15:10 -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.464 22:15:10 -- scripts/common.sh@336 -- # read -ra ver2 00:15:13.464 22:15:10 -- scripts/common.sh@337 -- # local 'op=<' 00:15:13.464 22:15:10 -- scripts/common.sh@339 -- # ver1_l=2 00:15:13.464 22:15:10 -- scripts/common.sh@340 -- # ver2_l=1 00:15:13.464 22:15:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:13.464 22:15:10 -- scripts/common.sh@343 -- # case "$op" in 00:15:13.464 22:15:10 -- scripts/common.sh@344 -- # : 1 00:15:13.464 22:15:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:13.464 22:15:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:13.464 22:15:10 -- scripts/common.sh@364 -- # decimal 1 00:15:13.464 22:15:10 -- scripts/common.sh@352 -- # local d=1 00:15:13.464 22:15:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.464 22:15:10 -- scripts/common.sh@354 -- # echo 1 00:15:13.464 22:15:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:13.464 22:15:10 -- scripts/common.sh@365 -- # decimal 2 00:15:13.464 22:15:10 -- scripts/common.sh@352 -- # local d=2 00:15:13.464 22:15:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.464 22:15:10 -- scripts/common.sh@354 -- # echo 2 00:15:13.464 22:15:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:13.464 22:15:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:13.464 22:15:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:13.464 22:15:10 -- scripts/common.sh@367 -- # return 0 00:15:13.464 22:15:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.464 22:15:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:13.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.464 --rc genhtml_branch_coverage=1 00:15:13.464 --rc genhtml_function_coverage=1 00:15:13.464 --rc genhtml_legend=1 00:15:13.464 --rc geninfo_all_blocks=1 00:15:13.464 --rc geninfo_unexecuted_blocks=1 00:15:13.464 00:15:13.464 ' 00:15:13.464 22:15:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:13.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.464 --rc genhtml_branch_coverage=1 00:15:13.464 --rc genhtml_function_coverage=1 00:15:13.464 --rc genhtml_legend=1 00:15:13.464 --rc geninfo_all_blocks=1 00:15:13.464 --rc geninfo_unexecuted_blocks=1 00:15:13.464 00:15:13.464 ' 00:15:13.464 22:15:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:13.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.464 --rc genhtml_branch_coverage=1 00:15:13.464 --rc genhtml_function_coverage=1 00:15:13.464 --rc genhtml_legend=1 00:15:13.464 --rc geninfo_all_blocks=1 00:15:13.464 --rc geninfo_unexecuted_blocks=1 00:15:13.464 00:15:13.464 ' 00:15:13.464 
22:15:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:13.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.464 --rc genhtml_branch_coverage=1 00:15:13.464 --rc genhtml_function_coverage=1 00:15:13.464 --rc genhtml_legend=1 00:15:13.464 --rc geninfo_all_blocks=1 00:15:13.464 --rc geninfo_unexecuted_blocks=1 00:15:13.464 00:15:13.464 ' 00:15:13.464 22:15:10 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:13.464 22:15:10 -- nvmf/common.sh@7 -- # uname -s 00:15:13.464 22:15:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.464 22:15:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.464 22:15:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.465 22:15:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.465 22:15:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.465 22:15:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.465 22:15:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.465 22:15:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.465 22:15:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.465 22:15:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.465 22:15:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:15:13.465 22:15:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:15:13.465 22:15:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.465 22:15:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.465 22:15:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:13.465 22:15:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:13.465 22:15:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.465 22:15:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.465 22:15:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.465 22:15:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.465 22:15:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.465 22:15:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.465 22:15:10 -- paths/export.sh@5 -- # export PATH 00:15:13.465 22:15:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.465 22:15:10 -- nvmf/common.sh@46 -- # : 0 00:15:13.465 22:15:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:13.465 22:15:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:13.465 22:15:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:13.465 22:15:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.465 22:15:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.465 22:15:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:13.465 22:15:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:13.465 22:15:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:13.465 22:15:10 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:13.465 22:15:10 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:13.465 22:15:10 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:13.465 22:15:10 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:13.465 22:15:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:13.465 22:15:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.465 22:15:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:13.465 22:15:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:13.465 22:15:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:13.465 22:15:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.465 22:15:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.465 22:15:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.465 22:15:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:13.465 22:15:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:13.465 22:15:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:13.465 22:15:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:13.465 22:15:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:13.465 22:15:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:13.465 22:15:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.465 22:15:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.465 22:15:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:13.465 22:15:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:13.465 22:15:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:13.465 22:15:10 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:13.465 22:15:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:13.465 22:15:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.465 22:15:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:13.465 22:15:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:13.465 22:15:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:13.465 22:15:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:13.465 22:15:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:13.724 22:15:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:13.724 Cannot find device "nvmf_tgt_br" 00:15:13.724 22:15:10 -- nvmf/common.sh@154 -- # true 00:15:13.724 22:15:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:13.724 Cannot find device "nvmf_tgt_br2" 00:15:13.724 22:15:10 -- nvmf/common.sh@155 -- # true 00:15:13.724 22:15:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:13.724 22:15:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:13.724 Cannot find device "nvmf_tgt_br" 00:15:13.724 22:15:10 -- nvmf/common.sh@157 -- # true 00:15:13.724 22:15:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:13.724 Cannot find device "nvmf_tgt_br2" 00:15:13.724 22:15:10 -- nvmf/common.sh@158 -- # true 00:15:13.724 22:15:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:13.724 22:15:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:13.724 22:15:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:13.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:13.724 22:15:10 -- nvmf/common.sh@161 -- # true 00:15:13.724 22:15:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:13.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:13.724 22:15:10 -- nvmf/common.sh@162 -- # true 00:15:13.724 22:15:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:13.724 22:15:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:13.724 22:15:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:13.724 22:15:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:13.724 22:15:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:13.724 22:15:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:13.724 22:15:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:13.724 22:15:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:13.724 22:15:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:13.724 22:15:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:13.724 22:15:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:13.724 22:15:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:13.724 22:15:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:13.724 22:15:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:13.724 22:15:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:13.724 22:15:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:13.724 22:15:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:13.724 22:15:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:13.724 22:15:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:13.724 22:15:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:13.984 22:15:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:13.984 22:15:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:13.984 22:15:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:13.984 22:15:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:13.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:15:13.984 00:15:13.984 --- 10.0.0.2 ping statistics --- 00:15:13.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.984 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:13.984 22:15:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:13.984 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:13.984 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:13.984 00:15:13.984 --- 10.0.0.3 ping statistics --- 00:15:13.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.984 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:13.984 22:15:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:13.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:13.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:15:13.984 00:15:13.984 --- 10.0.0.1 ping statistics --- 00:15:13.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.984 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:13.984 22:15:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.984 22:15:10 -- nvmf/common.sh@421 -- # return 0 00:15:13.984 22:15:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:13.984 22:15:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.984 22:15:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:13.984 22:15:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:13.984 22:15:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.984 22:15:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:13.984 22:15:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:13.984 22:15:10 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:13.984 22:15:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:13.984 22:15:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:13.984 22:15:10 -- common/autotest_common.sh@10 -- # set +x 00:15:13.984 22:15:10 -- nvmf/common.sh@469 -- # nvmfpid=74524 00:15:13.984 22:15:10 -- nvmf/common.sh@470 -- # waitforlisten 74524 00:15:13.984 22:15:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:13.984 22:15:10 -- common/autotest_common.sh@829 -- # '[' -z 74524 ']' 00:15:13.984 22:15:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.984 22:15:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.984 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:15:13.984 22:15:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.984 22:15:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.984 22:15:10 -- common/autotest_common.sh@10 -- # set +x 00:15:13.984 [2024-11-17 22:15:10.456804] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:13.984 [2024-11-17 22:15:10.456868] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.984 [2024-11-17 22:15:10.584140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.243 [2024-11-17 22:15:10.686030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:14.243 [2024-11-17 22:15:10.686170] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.243 [2024-11-17 22:15:10.686182] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.243 [2024-11-17 22:15:10.686190] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.243 [2024-11-17 22:15:10.686224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.810 22:15:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.810 22:15:11 -- common/autotest_common.sh@862 -- # return 0 00:15:14.810 22:15:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:14.810 22:15:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:14.810 22:15:11 -- common/autotest_common.sh@10 -- # set +x 00:15:14.810 22:15:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.810 22:15:11 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:14.810 22:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.810 22:15:11 -- common/autotest_common.sh@10 -- # set +x 00:15:14.810 [2024-11-17 22:15:11.401544] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.810 22:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.810 22:15:11 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:14.810 22:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.810 22:15:11 -- common/autotest_common.sh@10 -- # set +x 00:15:15.069 Malloc0 00:15:15.069 22:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.069 22:15:11 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:15.069 22:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.069 22:15:11 -- common/autotest_common.sh@10 -- # set +x 00:15:15.069 22:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.069 22:15:11 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:15.069 22:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.069 22:15:11 -- common/autotest_common.sh@10 -- # set +x 00:15:15.069 22:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.069 22:15:11 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:15:15.069 22:15:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.069 22:15:11 -- common/autotest_common.sh@10 -- # set +x 00:15:15.069 [2024-11-17 22:15:11.470932] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.069 22:15:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.069 22:15:11 -- target/queue_depth.sh@30 -- # bdevperf_pid=74574 00:15:15.069 22:15:11 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:15.069 22:15:11 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:15.069 22:15:11 -- target/queue_depth.sh@33 -- # waitforlisten 74574 /var/tmp/bdevperf.sock 00:15:15.069 22:15:11 -- common/autotest_common.sh@829 -- # '[' -z 74574 ']' 00:15:15.069 22:15:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.069 22:15:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:15.069 22:15:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.069 22:15:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.069 22:15:11 -- common/autotest_common.sh@10 -- # set +x 00:15:15.069 [2024-11-17 22:15:11.534449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:15.069 [2024-11-17 22:15:11.534550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74574 ] 00:15:15.069 [2024-11-17 22:15:11.677112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.329 [2024-11-17 22:15:11.784495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.897 22:15:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.897 22:15:12 -- common/autotest_common.sh@862 -- # return 0 00:15:15.897 22:15:12 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:15.897 22:15:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.897 22:15:12 -- common/autotest_common.sh@10 -- # set +x 00:15:16.156 NVMe0n1 00:15:16.156 22:15:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.156 22:15:12 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:16.156 Running I/O for 10 seconds... 
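Before the run's results below, it helps to condense what the trace above actually wired up. On the target side the test created a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener; on the host side bdevperf attached to that listener over NVMe/TCP and drove a 10-second verify workload. A minimal sketch of that sequence, assuming the SPDK checkout at /home/vagrant/spdk_repo/spdk used in this run and that rpc_cmd resolves to scripts/rpc.py against the target's default RPC socket:

  # target side: transport, backing bdev, subsystem, namespace, listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: bdevperf in RPC-driven mode, attach over NVMe/TCP, then start the workload
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -q 1024 value is what gives the test its name: bdevperf keeps up to 1024 outstanding 4 KiB I/Os queued against the exported namespace for the duration of the 10-second run whose results follow.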
00:15:26.132 00:15:26.132 Latency(us) 00:15:26.132 [2024-11-17T22:15:22.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.132 [2024-11-17T22:15:22.747Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:26.132 Verification LBA range: start 0x0 length 0x4000 00:15:26.132 NVMe0n1 : 10.05 17151.64 67.00 0.00 0.00 59511.85 11558.17 62437.93 00:15:26.132 [2024-11-17T22:15:22.747Z] =================================================================================================================== 00:15:26.132 [2024-11-17T22:15:22.747Z] Total : 17151.64 67.00 0.00 0.00 59511.85 11558.17 62437.93 00:15:26.132 0 00:15:26.132 22:15:22 -- target/queue_depth.sh@39 -- # killprocess 74574 00:15:26.132 22:15:22 -- common/autotest_common.sh@936 -- # '[' -z 74574 ']' 00:15:26.132 22:15:22 -- common/autotest_common.sh@940 -- # kill -0 74574 00:15:26.132 22:15:22 -- common/autotest_common.sh@941 -- # uname 00:15:26.132 22:15:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.391 22:15:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74574 00:15:26.391 killing process with pid 74574 00:15:26.391 Received shutdown signal, test time was about 10.000000 seconds 00:15:26.391 00:15:26.391 Latency(us) 00:15:26.391 [2024-11-17T22:15:23.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.391 [2024-11-17T22:15:23.006Z] =================================================================================================================== 00:15:26.391 [2024-11-17T22:15:23.006Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:26.391 22:15:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:26.391 22:15:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:26.391 22:15:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74574' 00:15:26.391 22:15:22 -- common/autotest_common.sh@955 -- # kill 74574 00:15:26.391 22:15:22 -- common/autotest_common.sh@960 -- # wait 74574 00:15:26.391 22:15:22 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:26.391 22:15:22 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:26.391 22:15:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:26.391 22:15:22 -- nvmf/common.sh@116 -- # sync 00:15:26.651 22:15:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:26.651 22:15:23 -- nvmf/common.sh@119 -- # set +e 00:15:26.651 22:15:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:26.651 22:15:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:26.651 rmmod nvme_tcp 00:15:26.651 rmmod nvme_fabrics 00:15:26.651 rmmod nvme_keyring 00:15:26.651 22:15:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:26.651 22:15:23 -- nvmf/common.sh@123 -- # set -e 00:15:26.651 22:15:23 -- nvmf/common.sh@124 -- # return 0 00:15:26.651 22:15:23 -- nvmf/common.sh@477 -- # '[' -n 74524 ']' 00:15:26.651 22:15:23 -- nvmf/common.sh@478 -- # killprocess 74524 00:15:26.651 22:15:23 -- common/autotest_common.sh@936 -- # '[' -z 74524 ']' 00:15:26.651 22:15:23 -- common/autotest_common.sh@940 -- # kill -0 74524 00:15:26.651 22:15:23 -- common/autotest_common.sh@941 -- # uname 00:15:26.651 22:15:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.651 22:15:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74524 00:15:26.651 killing process with pid 74524 00:15:26.651 22:15:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:26.651 22:15:23 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:26.651 22:15:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74524' 00:15:26.651 22:15:23 -- common/autotest_common.sh@955 -- # kill 74524 00:15:26.651 22:15:23 -- common/autotest_common.sh@960 -- # wait 74524 00:15:26.910 22:15:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:26.910 22:15:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:26.910 22:15:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:26.910 22:15:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.911 22:15:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:26.911 22:15:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.911 22:15:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.911 22:15:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.911 22:15:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:26.911 00:15:26.911 real 0m13.628s 00:15:26.911 user 0m22.381s 00:15:26.911 sys 0m2.677s 00:15:26.911 22:15:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:26.911 ************************************ 00:15:26.911 END TEST nvmf_queue_depth 00:15:26.911 ************************************ 00:15:26.911 22:15:23 -- common/autotest_common.sh@10 -- # set +x 00:15:27.170 22:15:23 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:27.170 22:15:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:27.170 22:15:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:27.170 22:15:23 -- common/autotest_common.sh@10 -- # set +x 00:15:27.170 ************************************ 00:15:27.170 START TEST nvmf_multipath 00:15:27.170 ************************************ 00:15:27.170 22:15:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:27.170 * Looking for test storage... 00:15:27.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:27.170 22:15:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:27.170 22:15:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:27.170 22:15:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:27.170 22:15:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:27.170 22:15:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:27.170 22:15:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:27.170 22:15:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:27.170 22:15:23 -- scripts/common.sh@335 -- # IFS=.-: 00:15:27.170 22:15:23 -- scripts/common.sh@335 -- # read -ra ver1 00:15:27.170 22:15:23 -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.171 22:15:23 -- scripts/common.sh@336 -- # read -ra ver2 00:15:27.171 22:15:23 -- scripts/common.sh@337 -- # local 'op=<' 00:15:27.171 22:15:23 -- scripts/common.sh@339 -- # ver1_l=2 00:15:27.171 22:15:23 -- scripts/common.sh@340 -- # ver2_l=1 00:15:27.171 22:15:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:27.171 22:15:23 -- scripts/common.sh@343 -- # case "$op" in 00:15:27.171 22:15:23 -- scripts/common.sh@344 -- # : 1 00:15:27.171 22:15:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:27.171 22:15:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.171 22:15:23 -- scripts/common.sh@364 -- # decimal 1 00:15:27.171 22:15:23 -- scripts/common.sh@352 -- # local d=1 00:15:27.171 22:15:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.171 22:15:23 -- scripts/common.sh@354 -- # echo 1 00:15:27.171 22:15:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:27.171 22:15:23 -- scripts/common.sh@365 -- # decimal 2 00:15:27.171 22:15:23 -- scripts/common.sh@352 -- # local d=2 00:15:27.171 22:15:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.171 22:15:23 -- scripts/common.sh@354 -- # echo 2 00:15:27.171 22:15:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:27.171 22:15:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:27.171 22:15:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:27.171 22:15:23 -- scripts/common.sh@367 -- # return 0 00:15:27.171 22:15:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.171 22:15:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:27.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.171 --rc genhtml_branch_coverage=1 00:15:27.171 --rc genhtml_function_coverage=1 00:15:27.171 --rc genhtml_legend=1 00:15:27.171 --rc geninfo_all_blocks=1 00:15:27.171 --rc geninfo_unexecuted_blocks=1 00:15:27.171 00:15:27.171 ' 00:15:27.171 22:15:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:27.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.171 --rc genhtml_branch_coverage=1 00:15:27.171 --rc genhtml_function_coverage=1 00:15:27.171 --rc genhtml_legend=1 00:15:27.171 --rc geninfo_all_blocks=1 00:15:27.171 --rc geninfo_unexecuted_blocks=1 00:15:27.171 00:15:27.171 ' 00:15:27.171 22:15:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:27.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.171 --rc genhtml_branch_coverage=1 00:15:27.171 --rc genhtml_function_coverage=1 00:15:27.171 --rc genhtml_legend=1 00:15:27.171 --rc geninfo_all_blocks=1 00:15:27.171 --rc geninfo_unexecuted_blocks=1 00:15:27.171 00:15:27.171 ' 00:15:27.171 22:15:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:27.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.171 --rc genhtml_branch_coverage=1 00:15:27.171 --rc genhtml_function_coverage=1 00:15:27.171 --rc genhtml_legend=1 00:15:27.171 --rc geninfo_all_blocks=1 00:15:27.171 --rc geninfo_unexecuted_blocks=1 00:15:27.171 00:15:27.171 ' 00:15:27.171 22:15:23 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:27.171 22:15:23 -- nvmf/common.sh@7 -- # uname -s 00:15:27.171 22:15:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.171 22:15:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.171 22:15:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.171 22:15:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.171 22:15:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.171 22:15:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.171 22:15:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.171 22:15:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.171 22:15:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.171 22:15:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.171 22:15:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:15:27.171 
22:15:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:15:27.171 22:15:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.171 22:15:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.171 22:15:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:27.171 22:15:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.171 22:15:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.171 22:15:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.171 22:15:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.171 22:15:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.171 22:15:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.171 22:15:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.171 22:15:23 -- paths/export.sh@5 -- # export PATH 00:15:27.171 22:15:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.171 22:15:23 -- nvmf/common.sh@46 -- # : 0 00:15:27.171 22:15:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:27.171 22:15:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:27.171 22:15:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:27.171 22:15:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.171 22:15:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.171 22:15:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
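The re-sourced nvmf/common.sh block above pins down the per-run NVMe-oF identity and defaults: listener ports 4420/4421/4422, a host NQN freshly generated with nvme gen-hostnqn, and a host ID matching its UUID suffix, with the two identity flags collected into the NVME_HOST array. They matter later because the initiator presents them on every connect; a minimal, hypothetical usage sketch (subsystem NQN and address taken from the cnode1 subsystem this test sets up further down):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

The connects actually traced later in this test pass a couple of extra flags (-g and -G) on top of these.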
00:15:27.171 22:15:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:27.171 22:15:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:27.171 22:15:23 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.171 22:15:23 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.171 22:15:23 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:27.171 22:15:23 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:27.171 22:15:23 -- target/multipath.sh@43 -- # nvmftestinit 00:15:27.171 22:15:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:27.171 22:15:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.171 22:15:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:27.171 22:15:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:27.171 22:15:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:27.171 22:15:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.171 22:15:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.171 22:15:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.171 22:15:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:27.171 22:15:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:27.171 22:15:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:27.171 22:15:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:27.171 22:15:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:27.171 22:15:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:27.171 22:15:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.171 22:15:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:27.171 22:15:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:27.171 22:15:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:27.171 22:15:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:27.171 22:15:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:27.171 22:15:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:27.171 22:15:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.171 22:15:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:27.171 22:15:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:27.171 22:15:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:27.171 22:15:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:27.171 22:15:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:27.171 22:15:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:27.430 Cannot find device "nvmf_tgt_br" 00:15:27.430 22:15:23 -- nvmf/common.sh@154 -- # true 00:15:27.430 22:15:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.430 Cannot find device "nvmf_tgt_br2" 00:15:27.430 22:15:23 -- nvmf/common.sh@155 -- # true 00:15:27.430 22:15:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:27.430 22:15:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:27.430 Cannot find device "nvmf_tgt_br" 00:15:27.430 22:15:23 -- nvmf/common.sh@157 -- # true 00:15:27.430 22:15:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:27.430 Cannot find device "nvmf_tgt_br2" 00:15:27.430 22:15:23 -- nvmf/common.sh@158 -- # true 00:15:27.430 22:15:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:27.430 22:15:23 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:27.431 22:15:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.431 22:15:23 -- nvmf/common.sh@161 -- # true 00:15:27.431 22:15:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.431 22:15:23 -- nvmf/common.sh@162 -- # true 00:15:27.431 22:15:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:27.431 22:15:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:27.431 22:15:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:27.431 22:15:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:27.431 22:15:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:27.431 22:15:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:27.431 22:15:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:27.431 22:15:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:27.431 22:15:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:27.431 22:15:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:27.431 22:15:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:27.431 22:15:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:27.431 22:15:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:27.431 22:15:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.431 22:15:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:27.431 22:15:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:27.431 22:15:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:27.431 22:15:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:27.431 22:15:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:27.431 22:15:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:27.431 22:15:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:27.431 22:15:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:27.431 22:15:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:27.431 22:15:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:27.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:27.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:15:27.431 00:15:27.431 --- 10.0.0.2 ping statistics --- 00:15:27.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.431 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:15:27.431 22:15:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:27.431 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:27.431 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:15:27.431 00:15:27.431 --- 10.0.0.3 ping statistics --- 00:15:27.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.431 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:27.690 22:15:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:27.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:27.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:15:27.690 00:15:27.690 --- 10.0.0.1 ping statistics --- 00:15:27.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.690 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:27.690 22:15:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.690 22:15:24 -- nvmf/common.sh@421 -- # return 0 00:15:27.690 22:15:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:27.690 22:15:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.690 22:15:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:27.690 22:15:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:27.690 22:15:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.690 22:15:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:27.690 22:15:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:27.690 22:15:24 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:27.690 22:15:24 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:27.690 22:15:24 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:27.690 22:15:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:27.690 22:15:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:27.690 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:15:27.690 22:15:24 -- nvmf/common.sh@469 -- # nvmfpid=74911 00:15:27.690 22:15:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:27.690 22:15:24 -- nvmf/common.sh@470 -- # waitforlisten 74911 00:15:27.690 22:15:24 -- common/autotest_common.sh@829 -- # '[' -z 74911 ']' 00:15:27.690 22:15:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.690 22:15:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.690 22:15:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.690 22:15:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.690 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:15:27.690 [2024-11-17 22:15:24.134688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:27.690 [2024-11-17 22:15:24.134794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.690 [2024-11-17 22:15:24.274195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:27.949 [2024-11-17 22:15:24.357160] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:27.949 [2024-11-17 22:15:24.357309] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:27.949 [2024-11-17 22:15:24.357323] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.949 [2024-11-17 22:15:24.357331] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.949 [2024-11-17 22:15:24.357526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.949 [2024-11-17 22:15:24.357741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.949 [2024-11-17 22:15:24.357893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.949 [2024-11-17 22:15:24.357904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.517 22:15:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.517 22:15:25 -- common/autotest_common.sh@862 -- # return 0 00:15:28.517 22:15:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:28.517 22:15:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:28.517 22:15:25 -- common/autotest_common.sh@10 -- # set +x 00:15:28.517 22:15:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.517 22:15:25 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:28.775 [2024-11-17 22:15:25.299411] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.775 22:15:25 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:29.033 Malloc0 00:15:29.033 22:15:25 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:29.291 22:15:25 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:29.551 22:15:26 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.810 [2024-11-17 22:15:26.191427] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.810 22:15:26 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:29.810 [2024-11-17 22:15:26.403590] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:29.810 22:15:26 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:30.069 22:15:26 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:30.328 22:15:26 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:30.328 22:15:26 -- common/autotest_common.sh@1187 -- # local i=0 00:15:30.328 22:15:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:30.328 22:15:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:30.328 22:15:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:32.860 22:15:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
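At this point the multipath topology is in place: the same cnode1 subsystem is listening on both target-side addresses (10.0.0.2 and 10.0.0.3, port 4420 each), and the host has issued one nvme connect per address, so the kernel assembles a single NVMe subsystem with two controllers/paths (detected a little further down as nvme0c0n1 and nvme0c1n1). A condensed sketch of that wiring, using the same flags as the trace above and assuming it is run from the SPDK repo root used in this job:

  # one listener per target address, both serving the same subsystem
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # one connect per path; the host identity flags come from nvmf/common.sh
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G

The subsystem above was created with an extra -r flag, presumably ANA reporting, which is what lets the two listeners advertise different states. The failover portion that follows flips one listener at a time with nvmf_subsystem_listener_set_ana_state (inaccessible, non_optimized, optimized) while fio runs, and polls /sys/block/nvme0c0n1/ana_state and /sys/block/nvme0c1n1/ana_state until the host has observed the change.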
00:15:32.860 22:15:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:32.860 22:15:28 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:32.860 22:15:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:32.860 22:15:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.860 22:15:28 -- common/autotest_common.sh@1197 -- # return 0 00:15:32.860 22:15:28 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:32.860 22:15:28 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:32.860 22:15:28 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:32.860 22:15:28 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:32.860 22:15:28 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:32.860 22:15:28 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:32.860 22:15:28 -- target/multipath.sh@38 -- # return 0 00:15:32.860 22:15:28 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:32.860 22:15:28 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:32.860 22:15:28 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:32.860 22:15:28 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:32.860 22:15:28 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:32.860 22:15:28 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:32.860 22:15:28 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:32.860 22:15:28 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:32.860 22:15:28 -- target/multipath.sh@22 -- # local timeout=20 00:15:32.860 22:15:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:32.860 22:15:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:32.860 22:15:28 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:32.860 22:15:28 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:32.860 22:15:28 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:32.860 22:15:28 -- target/multipath.sh@22 -- # local timeout=20 00:15:32.860 22:15:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:32.860 22:15:28 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:32.860 22:15:28 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:32.860 22:15:28 -- target/multipath.sh@85 -- # echo numa 00:15:32.860 22:15:28 -- target/multipath.sh@88 -- # fio_pid=75049 00:15:32.860 22:15:28 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:32.860 22:15:28 -- target/multipath.sh@90 -- # sleep 1 00:15:32.860 [global] 00:15:32.860 thread=1 00:15:32.860 invalidate=1 00:15:32.860 rw=randrw 00:15:32.860 time_based=1 00:15:32.860 runtime=6 00:15:32.860 ioengine=libaio 00:15:32.860 direct=1 00:15:32.860 bs=4096 00:15:32.860 iodepth=128 00:15:32.860 norandommap=0 00:15:32.860 numjobs=1 00:15:32.860 00:15:32.860 verify_dump=1 00:15:32.860 verify_backlog=512 00:15:32.860 verify_state_save=0 00:15:32.860 do_verify=1 00:15:32.860 verify=crc32c-intel 00:15:32.860 [job0] 00:15:32.860 filename=/dev/nvme0n1 00:15:32.860 Could not set queue depth (nvme0n1) 00:15:32.860 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:32.860 fio-3.35 00:15:32.860 Starting 1 thread 00:15:33.429 22:15:29 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:33.687 22:15:30 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:33.946 22:15:30 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:33.946 22:15:30 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:33.946 22:15:30 -- target/multipath.sh@22 -- # local timeout=20 00:15:33.946 22:15:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:33.946 22:15:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:33.946 22:15:30 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:33.946 22:15:30 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:33.946 22:15:30 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:33.946 22:15:30 -- target/multipath.sh@22 -- # local timeout=20 00:15:33.946 22:15:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:33.946 22:15:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:33.946 22:15:30 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:33.946 22:15:30 -- target/multipath.sh@25 -- # sleep 1s 00:15:34.882 22:15:31 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:34.882 22:15:31 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:34.882 22:15:31 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:34.882 22:15:31 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:35.140 22:15:31 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:35.400 22:15:31 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:35.400 22:15:31 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:35.400 22:15:31 -- target/multipath.sh@22 -- # local timeout=20 00:15:35.400 22:15:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:35.400 22:15:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:35.400 22:15:31 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:35.400 22:15:31 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:35.400 22:15:31 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:35.400 22:15:31 -- target/multipath.sh@22 -- # local timeout=20 00:15:35.400 22:15:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:35.400 22:15:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:35.400 22:15:31 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:35.400 22:15:31 -- target/multipath.sh@25 -- # sleep 1s 00:15:36.777 22:15:32 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:36.777 22:15:32 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:36.777 22:15:32 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:36.777 22:15:32 -- target/multipath.sh@104 -- # wait 75049 00:15:38.683 00:15:38.683 job0: (groupid=0, jobs=1): err= 0: pid=75075: Sun Nov 17 22:15:35 2024 00:15:38.683 read: IOPS=13.1k, BW=51.3MiB/s (53.8MB/s)(308MiB/6003msec) 00:15:38.683 slat (usec): min=4, max=5213, avg=43.14, stdev=191.95 00:15:38.683 clat (usec): min=662, max=13340, avg=6692.69, stdev=1021.31 00:15:38.683 lat (usec): min=675, max=13349, avg=6735.83, stdev=1028.23 00:15:38.683 clat percentiles (usec): 00:15:38.683 | 1.00th=[ 4228], 5.00th=[ 5276], 10.00th=[ 5604], 20.00th=[ 5932], 00:15:38.683 | 30.00th=[ 6128], 40.00th=[ 6325], 50.00th=[ 6587], 60.00th=[ 6915], 00:15:38.683 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 7898], 95.00th=[ 8455], 00:15:38.683 | 99.00th=[ 9896], 99.50th=[10159], 99.90th=[11076], 99.95th=[11207], 00:15:38.683 | 99.99th=[12256] 00:15:38.683 bw ( KiB/s): min=11792, max=37296, per=53.53%, avg=28123.64, stdev=7070.85, samples=11 00:15:38.683 iops : min= 2948, max= 9324, avg=7030.91, stdev=1767.71, samples=11 00:15:38.683 write: IOPS=7739, BW=30.2MiB/s (31.7MB/s)(155MiB/5132msec); 0 zone resets 00:15:38.683 slat (usec): min=7, max=3387, avg=55.49, stdev=136.86 00:15:38.683 clat (usec): min=548, max=11035, avg=5879.67, stdev=852.18 00:15:38.683 lat (usec): min=575, max=11109, avg=5935.17, stdev=853.95 00:15:38.683 clat percentiles (usec): 00:15:38.683 | 1.00th=[ 3425], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 5342], 00:15:38.683 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 5932], 60.00th=[ 6063], 00:15:38.683 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6718], 95.00th=[ 6980], 00:15:38.683 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[10159], 99.95th=[10814], 00:15:38.683 | 99.99th=[10945] 00:15:38.683 bw ( KiB/s): min=12280, max=37040, per=90.78%, avg=28103.27, stdev=6746.10, samples=11 00:15:38.683 iops : min= 3070, max= 9260, avg=7025.82, stdev=1686.53, samples=11 00:15:38.683 lat (usec) : 750=0.01%, 1000=0.01% 00:15:38.683 lat (msec) : 2=0.02%, 4=1.56%, 10=97.86%, 20=0.56% 00:15:38.683 cpu : usr=6.25%, sys=23.90%, ctx=7200, majf=0, minf=175 00:15:38.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:38.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:38.683 issued rwts: total=78843,39719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:38.683 00:15:38.683 Run status group 0 (all jobs): 00:15:38.683 READ: bw=51.3MiB/s (53.8MB/s), 51.3MiB/s-51.3MiB/s (53.8MB/s-53.8MB/s), io=308MiB (323MB), run=6003-6003msec 00:15:38.683 WRITE: bw=30.2MiB/s (31.7MB/s), 30.2MiB/s-30.2MiB/s (31.7MB/s-31.7MB/s), io=155MiB (163MB), run=5132-5132msec 00:15:38.683 00:15:38.683 Disk stats (read/write): 00:15:38.683 nvme0n1: ios=77888/38908, merge=0/0, ticks=485050/213478, in_queue=698528, util=98.63% 00:15:38.683 22:15:35 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:38.941 22:15:35 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:39.200 22:15:35 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:15:39.200 22:15:35 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:39.200 22:15:35 -- target/multipath.sh@22 -- # local timeout=20 00:15:39.200 22:15:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:39.200 22:15:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:39.200 22:15:35 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:39.200 22:15:35 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:39.200 22:15:35 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:39.200 22:15:35 -- target/multipath.sh@22 -- # local timeout=20 00:15:39.200 22:15:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:39.200 22:15:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:39.200 22:15:35 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:39.200 22:15:35 -- target/multipath.sh@25 -- # sleep 1s 00:15:40.576 22:15:36 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:40.576 22:15:36 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:40.576 22:15:36 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:40.576 22:15:36 -- target/multipath.sh@113 -- # echo round-robin 00:15:40.576 22:15:36 -- target/multipath.sh@116 -- # fio_pid=75209 00:15:40.576 22:15:36 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:40.576 22:15:36 -- target/multipath.sh@118 -- # sleep 1 00:15:40.576 [global] 00:15:40.576 thread=1 00:15:40.576 invalidate=1 00:15:40.576 rw=randrw 00:15:40.576 time_based=1 00:15:40.576 runtime=6 00:15:40.576 ioengine=libaio 00:15:40.576 direct=1 00:15:40.576 bs=4096 00:15:40.576 iodepth=128 00:15:40.576 norandommap=0 00:15:40.576 numjobs=1 00:15:40.576 00:15:40.576 verify_dump=1 00:15:40.576 verify_backlog=512 00:15:40.576 verify_state_save=0 00:15:40.576 do_verify=1 00:15:40.576 verify=crc32c-intel 00:15:40.576 [job0] 00:15:40.576 filename=/dev/nvme0n1 00:15:40.576 Could not set queue depth (nvme0n1) 00:15:40.576 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:40.576 fio-3.35 00:15:40.576 Starting 1 thread 00:15:41.512 22:15:37 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:41.771 22:15:38 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:41.771 22:15:38 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:41.771 22:15:38 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:41.771 22:15:38 -- target/multipath.sh@22 -- # local timeout=20 00:15:41.771 22:15:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:41.771 22:15:38 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:41.771 22:15:38 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:41.771 22:15:38 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:41.771 22:15:38 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:41.771 22:15:38 -- target/multipath.sh@22 -- # local timeout=20 00:15:41.771 22:15:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:41.771 22:15:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:41.771 22:15:38 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:41.771 22:15:38 -- target/multipath.sh@25 -- # sleep 1s 00:15:43.149 22:15:39 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:43.149 22:15:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:43.149 22:15:39 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:43.149 22:15:39 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:43.149 22:15:39 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:43.412 22:15:39 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:43.412 22:15:39 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:43.412 22:15:39 -- target/multipath.sh@22 -- # local timeout=20 00:15:43.412 22:15:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:43.412 22:15:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:43.412 22:15:39 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:43.412 22:15:39 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:43.412 22:15:39 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:43.412 22:15:39 -- target/multipath.sh@22 -- # local timeout=20 00:15:43.412 22:15:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:43.412 22:15:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:43.412 22:15:39 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:43.412 22:15:39 -- target/multipath.sh@25 -- # sleep 1s 00:15:44.348 22:15:40 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:44.348 22:15:40 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:44.348 22:15:40 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:44.348 22:15:40 -- target/multipath.sh@132 -- # wait 75209 00:15:46.881 00:15:46.881 job0: (groupid=0, jobs=1): err= 0: pid=75234: Sun Nov 17 22:15:43 2024 00:15:46.881 read: IOPS=13.1k, BW=51.0MiB/s (53.5MB/s)(306MiB/6005msec) 00:15:46.881 slat (usec): min=4, max=5919, avg=39.68, stdev=198.20 00:15:46.881 clat (usec): min=245, max=21748, avg=6798.71, stdev=1492.27 00:15:46.881 lat (usec): min=257, max=23446, avg=6838.39, stdev=1501.70 00:15:46.881 clat percentiles (usec): 00:15:46.881 | 1.00th=[ 3326], 5.00th=[ 4752], 10.00th=[ 5538], 20.00th=[ 5932], 00:15:46.881 | 30.00th=[ 6128], 40.00th=[ 6325], 50.00th=[ 6587], 60.00th=[ 6915], 00:15:46.881 | 70.00th=[ 7242], 80.00th=[ 7635], 90.00th=[ 8455], 95.00th=[ 9372], 00:15:46.881 | 99.00th=[11338], 99.50th=[12911], 99.90th=[17957], 99.95th=[18220], 00:15:46.881 | 99.99th=[21627] 00:15:46.881 bw ( KiB/s): min=11960, max=34024, per=52.41%, avg=27386.91, stdev=6854.85, samples=11 00:15:46.881 iops : min= 2990, max= 8506, avg=6846.73, stdev=1713.71, samples=11 00:15:46.881 write: IOPS=7552, BW=29.5MiB/s (30.9MB/s)(156MiB/5281msec); 0 zone resets 00:15:46.881 slat (usec): min=9, max=1762, avg=47.29, stdev=132.81 00:15:46.881 clat (usec): min=261, max=20997, avg=5738.73, stdev=1214.93 00:15:46.881 lat (usec): min=294, max=21008, avg=5786.01, stdev=1221.88 00:15:46.881 clat percentiles (usec): 00:15:46.881 | 1.00th=[ 2737], 5.00th=[ 3720], 10.00th=[ 4293], 20.00th=[ 5014], 00:15:46.881 | 30.00th=[ 5342], 40.00th=[ 5604], 50.00th=[ 5800], 60.00th=[ 5997], 00:15:46.881 | 70.00th=[ 6194], 80.00th=[ 6390], 90.00th=[ 6849], 95.00th=[ 7570], 00:15:46.881 | 99.00th=[ 9503], 99.50th=[10028], 99.90th=[12780], 99.95th=[14877], 00:15:46.881 | 99.99th=[19530] 00:15:46.881 bw ( KiB/s): min=12496, max=33880, per=90.53%, avg=27350.55, stdev=6646.40, samples=11 00:15:46.881 iops : min= 3124, max= 8470, avg=6837.64, stdev=1661.60, samples=11 00:15:46.881 lat (usec) : 250=0.01%, 500=0.03%, 750=0.06%, 1000=0.11% 00:15:46.881 lat (msec) : 2=0.28%, 4=3.42%, 10=94.17%, 20=1.91%, 50=0.01% 00:15:46.881 cpu : usr=4.96%, sys=18.89%, ctx=7121, majf=0, minf=102 00:15:46.881 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:46.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.881 issued rwts: total=78452,39887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.881 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.881 00:15:46.881 Run status group 0 (all jobs): 00:15:46.881 READ: bw=51.0MiB/s (53.5MB/s), 51.0MiB/s-51.0MiB/s (53.5MB/s-53.5MB/s), io=306MiB (321MB), run=6005-6005msec 00:15:46.881 WRITE: bw=29.5MiB/s (30.9MB/s), 29.5MiB/s-29.5MiB/s (30.9MB/s-30.9MB/s), io=156MiB (163MB), run=5281-5281msec 00:15:46.881 00:15:46.881 Disk stats (read/write): 00:15:46.881 nvme0n1: ios=77693/39262, merge=0/0, ticks=498450/211505, in_queue=709955, util=98.56% 00:15:46.881 22:15:43 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:46.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:46.881 22:15:43 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:46.881 22:15:43 -- common/autotest_common.sh@1208 -- # local i=0 00:15:46.881 22:15:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 
00:15:46.881 22:15:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.881 22:15:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:46.881 22:15:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:46.881 22:15:43 -- common/autotest_common.sh@1220 -- # return 0 00:15:46.881 22:15:43 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:47.140 22:15:43 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:47.141 22:15:43 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:47.141 22:15:43 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:47.141 22:15:43 -- target/multipath.sh@144 -- # nvmftestfini 00:15:47.141 22:15:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:47.141 22:15:43 -- nvmf/common.sh@116 -- # sync 00:15:47.141 22:15:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:47.141 22:15:43 -- nvmf/common.sh@119 -- # set +e 00:15:47.141 22:15:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:47.141 22:15:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:47.141 rmmod nvme_tcp 00:15:47.141 rmmod nvme_fabrics 00:15:47.141 rmmod nvme_keyring 00:15:47.141 22:15:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:47.141 22:15:43 -- nvmf/common.sh@123 -- # set -e 00:15:47.141 22:15:43 -- nvmf/common.sh@124 -- # return 0 00:15:47.141 22:15:43 -- nvmf/common.sh@477 -- # '[' -n 74911 ']' 00:15:47.141 22:15:43 -- nvmf/common.sh@478 -- # killprocess 74911 00:15:47.141 22:15:43 -- common/autotest_common.sh@936 -- # '[' -z 74911 ']' 00:15:47.141 22:15:43 -- common/autotest_common.sh@940 -- # kill -0 74911 00:15:47.141 22:15:43 -- common/autotest_common.sh@941 -- # uname 00:15:47.141 22:15:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:47.141 22:15:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74911 00:15:47.402 killing process with pid 74911 00:15:47.402 22:15:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:47.402 22:15:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:47.402 22:15:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74911' 00:15:47.402 22:15:43 -- common/autotest_common.sh@955 -- # kill 74911 00:15:47.402 22:15:43 -- common/autotest_common.sh@960 -- # wait 74911 00:15:47.661 22:15:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:47.661 22:15:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:47.661 22:15:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:47.661 22:15:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.661 22:15:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:47.661 22:15:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.661 22:15:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.661 22:15:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.661 22:15:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:47.661 ************************************ 00:15:47.661 END TEST nvmf_multipath 00:15:47.661 ************************************ 00:15:47.661 00:15:47.661 real 0m20.524s 00:15:47.661 user 1m20.310s 00:15:47.661 sys 0m5.970s 00:15:47.661 22:15:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:47.661 22:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:47.661 22:15:44 -- 
nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:47.661 22:15:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:47.661 22:15:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:47.661 22:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:47.661 ************************************ 00:15:47.661 START TEST nvmf_zcopy 00:15:47.661 ************************************ 00:15:47.661 22:15:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:47.661 * Looking for test storage... 00:15:47.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:47.661 22:15:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:47.661 22:15:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:47.661 22:15:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:47.920 22:15:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:47.920 22:15:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:47.920 22:15:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:47.920 22:15:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:47.920 22:15:44 -- scripts/common.sh@335 -- # IFS=.-: 00:15:47.920 22:15:44 -- scripts/common.sh@335 -- # read -ra ver1 00:15:47.920 22:15:44 -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.920 22:15:44 -- scripts/common.sh@336 -- # read -ra ver2 00:15:47.920 22:15:44 -- scripts/common.sh@337 -- # local 'op=<' 00:15:47.920 22:15:44 -- scripts/common.sh@339 -- # ver1_l=2 00:15:47.920 22:15:44 -- scripts/common.sh@340 -- # ver2_l=1 00:15:47.920 22:15:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:47.920 22:15:44 -- scripts/common.sh@343 -- # case "$op" in 00:15:47.920 22:15:44 -- scripts/common.sh@344 -- # : 1 00:15:47.920 22:15:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:47.920 22:15:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.920 22:15:44 -- scripts/common.sh@364 -- # decimal 1 00:15:47.920 22:15:44 -- scripts/common.sh@352 -- # local d=1 00:15:47.920 22:15:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.920 22:15:44 -- scripts/common.sh@354 -- # echo 1 00:15:47.920 22:15:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:47.920 22:15:44 -- scripts/common.sh@365 -- # decimal 2 00:15:47.920 22:15:44 -- scripts/common.sh@352 -- # local d=2 00:15:47.920 22:15:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.920 22:15:44 -- scripts/common.sh@354 -- # echo 2 00:15:47.920 22:15:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:47.920 22:15:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:47.920 22:15:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:47.920 22:15:44 -- scripts/common.sh@367 -- # return 0 00:15:47.920 22:15:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.921 22:15:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:47.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.921 --rc genhtml_branch_coverage=1 00:15:47.921 --rc genhtml_function_coverage=1 00:15:47.921 --rc genhtml_legend=1 00:15:47.921 --rc geninfo_all_blocks=1 00:15:47.921 --rc geninfo_unexecuted_blocks=1 00:15:47.921 00:15:47.921 ' 00:15:47.921 22:15:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:47.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.921 --rc genhtml_branch_coverage=1 00:15:47.921 --rc genhtml_function_coverage=1 00:15:47.921 --rc genhtml_legend=1 00:15:47.921 --rc geninfo_all_blocks=1 00:15:47.921 --rc geninfo_unexecuted_blocks=1 00:15:47.921 00:15:47.921 ' 00:15:47.921 22:15:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:47.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.921 --rc genhtml_branch_coverage=1 00:15:47.921 --rc genhtml_function_coverage=1 00:15:47.921 --rc genhtml_legend=1 00:15:47.921 --rc geninfo_all_blocks=1 00:15:47.921 --rc geninfo_unexecuted_blocks=1 00:15:47.921 00:15:47.921 ' 00:15:47.921 22:15:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:47.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.921 --rc genhtml_branch_coverage=1 00:15:47.921 --rc genhtml_function_coverage=1 00:15:47.921 --rc genhtml_legend=1 00:15:47.921 --rc geninfo_all_blocks=1 00:15:47.921 --rc geninfo_unexecuted_blocks=1 00:15:47.921 00:15:47.921 ' 00:15:47.921 22:15:44 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:47.921 22:15:44 -- nvmf/common.sh@7 -- # uname -s 00:15:47.921 22:15:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.921 22:15:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.921 22:15:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.921 22:15:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.921 22:15:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.921 22:15:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.921 22:15:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.921 22:15:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.921 22:15:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.921 22:15:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.921 22:15:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:15:47.921 
22:15:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:15:47.921 22:15:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.921 22:15:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.921 22:15:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:47.921 22:15:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:47.921 22:15:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.921 22:15:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.921 22:15:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.921 22:15:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.921 22:15:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.921 22:15:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.921 22:15:44 -- paths/export.sh@5 -- # export PATH 00:15:47.921 22:15:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.921 22:15:44 -- nvmf/common.sh@46 -- # : 0 00:15:47.921 22:15:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:47.921 22:15:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:47.921 22:15:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:47.921 22:15:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.921 22:15:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.921 22:15:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
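For readers following the trace: the host-identity variables initialized here are what the initiator side later hands to nvme-cli. A minimal sketch of how they fit together, assuming the default tcp listener configured further down (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1); the hostid derivation and the connect invocation are illustrative and not copied from this excerpt:

# Illustrative sketch, not part of the traced script.
NVME_HOSTNQN=$(nvme gen-hostnqn)                 # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}                  # assumption: hostid is the trailing UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# Hypothetical initiator-side connect built from these values:
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"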
00:15:47.921 22:15:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:47.921 22:15:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:47.921 22:15:44 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:47.921 22:15:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:47.921 22:15:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.921 22:15:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:47.921 22:15:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:47.921 22:15:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:47.921 22:15:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.921 22:15:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.921 22:15:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.921 22:15:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:47.921 22:15:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:47.921 22:15:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:47.921 22:15:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:47.921 22:15:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:47.921 22:15:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:47.921 22:15:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.921 22:15:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.921 22:15:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:47.921 22:15:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:47.921 22:15:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:47.921 22:15:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:47.921 22:15:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:47.921 22:15:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.921 22:15:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:47.921 22:15:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:47.921 22:15:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:47.921 22:15:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:47.921 22:15:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:47.921 22:15:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:47.921 Cannot find device "nvmf_tgt_br" 00:15:47.921 22:15:44 -- nvmf/common.sh@154 -- # true 00:15:47.921 22:15:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.921 Cannot find device "nvmf_tgt_br2" 00:15:47.921 22:15:44 -- nvmf/common.sh@155 -- # true 00:15:47.921 22:15:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:47.921 22:15:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:47.921 Cannot find device "nvmf_tgt_br" 00:15:47.921 22:15:44 -- nvmf/common.sh@157 -- # true 00:15:47.921 22:15:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:47.921 Cannot find device "nvmf_tgt_br2" 00:15:47.921 22:15:44 -- nvmf/common.sh@158 -- # true 00:15:47.921 22:15:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:47.921 22:15:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:47.921 22:15:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.921 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.921 22:15:44 -- nvmf/common.sh@161 -- # true 00:15:47.921 22:15:44 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.921 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.921 22:15:44 -- nvmf/common.sh@162 -- # true 00:15:47.921 22:15:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:47.921 22:15:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:47.921 22:15:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:47.921 22:15:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:47.921 22:15:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:47.921 22:15:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:47.921 22:15:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.180 22:15:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:48.180 22:15:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:48.180 22:15:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:48.180 22:15:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:48.180 22:15:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:48.180 22:15:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:48.180 22:15:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.180 22:15:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.180 22:15:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.180 22:15:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:48.180 22:15:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:48.180 22:15:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.180 22:15:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:48.180 22:15:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.180 22:15:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.180 22:15:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.180 22:15:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:48.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:15:48.180 00:15:48.180 --- 10.0.0.2 ping statistics --- 00:15:48.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.180 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:48.180 22:15:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:48.180 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.180 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:48.180 00:15:48.181 --- 10.0.0.3 ping statistics --- 00:15:48.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.181 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:48.181 22:15:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:48.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:15:48.181 00:15:48.181 --- 10.0.0.1 ping statistics --- 00:15:48.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.181 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:48.181 22:15:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.181 22:15:44 -- nvmf/common.sh@421 -- # return 0 00:15:48.181 22:15:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:48.181 22:15:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.181 22:15:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:48.181 22:15:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:48.181 22:15:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.181 22:15:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:48.181 22:15:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:48.181 22:15:44 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:48.181 22:15:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:48.181 22:15:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:48.181 22:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:48.181 22:15:44 -- nvmf/common.sh@469 -- # nvmfpid=75514 00:15:48.181 22:15:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:48.181 22:15:44 -- nvmf/common.sh@470 -- # waitforlisten 75514 00:15:48.181 22:15:44 -- common/autotest_common.sh@829 -- # '[' -z 75514 ']' 00:15:48.181 22:15:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.181 22:15:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.181 22:15:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.181 22:15:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.181 22:15:44 -- common/autotest_common.sh@10 -- # set +x 00:15:48.181 [2024-11-17 22:15:44.748143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:48.181 [2024-11-17 22:15:44.748658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.440 [2024-11-17 22:15:44.891015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.440 [2024-11-17 22:15:45.023366] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:48.440 [2024-11-17 22:15:45.023550] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.440 [2024-11-17 22:15:45.023568] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.440 [2024-11-17 22:15:45.023581] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
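For reference, the nvmf_veth_init sequence traced above reduces to the topology below; this is a condensed sketch with interface names, addresses and firewall rules copied from the trace (link-up and cleanup steps omitted):

# Target side lives in its own network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target listener 1, 10.0.0.2/24
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target listener 2, 10.0.0.3/24
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# The *_br peer ends are enslaved to one bridge so both sides share an L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# The three pings above (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace)
# verify this wiring before the target application is started.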
00:15:48.440 [2024-11-17 22:15:45.023624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.376 22:15:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.376 22:15:45 -- common/autotest_common.sh@862 -- # return 0 00:15:49.376 22:15:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:49.376 22:15:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:49.376 22:15:45 -- common/autotest_common.sh@10 -- # set +x 00:15:49.376 22:15:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.376 22:15:45 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:49.376 22:15:45 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:49.376 22:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.376 22:15:45 -- common/autotest_common.sh@10 -- # set +x 00:15:49.376 [2024-11-17 22:15:45.804412] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.376 22:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.376 22:15:45 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:49.376 22:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.376 22:15:45 -- common/autotest_common.sh@10 -- # set +x 00:15:49.376 22:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.376 22:15:45 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.376 22:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.376 22:15:45 -- common/autotest_common.sh@10 -- # set +x 00:15:49.377 [2024-11-17 22:15:45.820544] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.377 22:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.377 22:15:45 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:49.377 22:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.377 22:15:45 -- common/autotest_common.sh@10 -- # set +x 00:15:49.377 22:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.377 22:15:45 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:49.377 22:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.377 22:15:45 -- common/autotest_common.sh@10 -- # set +x 00:15:49.377 malloc0 00:15:49.377 22:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.377 22:15:45 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:49.377 22:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.377 22:15:45 -- common/autotest_common.sh@10 -- # set +x 00:15:49.377 22:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.377 22:15:45 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:49.377 22:15:45 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:49.377 22:15:45 -- nvmf/common.sh@520 -- # config=() 00:15:49.377 22:15:45 -- nvmf/common.sh@520 -- # local subsystem config 00:15:49.377 22:15:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:49.377 22:15:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:49.377 { 00:15:49.377 "params": { 00:15:49.377 "name": "Nvme$subsystem", 00:15:49.377 "trtype": "$TEST_TRANSPORT", 
00:15:49.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:49.377 "adrfam": "ipv4", 00:15:49.377 "trsvcid": "$NVMF_PORT", 00:15:49.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:49.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:49.377 "hdgst": ${hdgst:-false}, 00:15:49.377 "ddgst": ${ddgst:-false} 00:15:49.377 }, 00:15:49.377 "method": "bdev_nvme_attach_controller" 00:15:49.377 } 00:15:49.377 EOF 00:15:49.377 )") 00:15:49.377 22:15:45 -- nvmf/common.sh@542 -- # cat 00:15:49.377 22:15:45 -- nvmf/common.sh@544 -- # jq . 00:15:49.377 22:15:45 -- nvmf/common.sh@545 -- # IFS=, 00:15:49.377 22:15:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:49.377 "params": { 00:15:49.377 "name": "Nvme1", 00:15:49.377 "trtype": "tcp", 00:15:49.377 "traddr": "10.0.0.2", 00:15:49.377 "adrfam": "ipv4", 00:15:49.377 "trsvcid": "4420", 00:15:49.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:49.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:49.377 "hdgst": false, 00:15:49.377 "ddgst": false 00:15:49.377 }, 00:15:49.377 "method": "bdev_nvme_attach_controller" 00:15:49.377 }' 00:15:49.377 [2024-11-17 22:15:45.908690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:49.377 [2024-11-17 22:15:45.908787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75565 ] 00:15:49.636 [2024-11-17 22:15:46.042812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.636 [2024-11-17 22:15:46.134865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.894 Running I/O for 10 seconds... 00:15:59.946 00:15:59.946 Latency(us) 00:15:59.946 [2024-11-17T22:15:56.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.946 [2024-11-17T22:15:56.561Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:59.946 Verification LBA range: start 0x0 length 0x1000 00:15:59.946 Nvme1n1 : 10.01 11037.07 86.23 0.00 0.00 11569.49 960.70 19660.80 00:15:59.946 [2024-11-17T22:15:56.561Z] =================================================================================================================== 00:15:59.946 [2024-11-17T22:15:56.562Z] Total : 11037.07 86.23 0.00 0.00 11569.49 960.70 19660.80 00:16:00.252 22:15:56 -- target/zcopy.sh@39 -- # perfpid=75687 00:16:00.252 22:15:56 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:00.252 22:15:56 -- common/autotest_common.sh@10 -- # set +x 00:16:00.252 22:15:56 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:00.252 22:15:56 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:00.252 22:15:56 -- nvmf/common.sh@520 -- # config=() 00:16:00.252 22:15:56 -- nvmf/common.sh@520 -- # local subsystem config 00:16:00.252 22:15:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:00.252 22:15:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:00.252 { 00:16:00.252 "params": { 00:16:00.252 "name": "Nvme$subsystem", 00:16:00.252 "trtype": "$TEST_TRANSPORT", 00:16:00.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:00.252 "adrfam": "ipv4", 00:16:00.252 "trsvcid": "$NVMF_PORT", 00:16:00.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:00.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:00.252 "hdgst": ${hdgst:-false}, 00:16:00.252 "ddgst": ${ddgst:-false} 
00:16:00.252 }, 00:16:00.252 "method": "bdev_nvme_attach_controller" 00:16:00.252 } 00:16:00.252 EOF 00:16:00.252 )") 00:16:00.252 22:15:56 -- nvmf/common.sh@542 -- # cat 00:16:00.252 [2024-11-17 22:15:56.558858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.558900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 22:15:56 -- nvmf/common.sh@544 -- # jq . 00:16:00.252 22:15:56 -- nvmf/common.sh@545 -- # IFS=, 00:16:00.252 22:15:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:00.252 "params": { 00:16:00.252 "name": "Nvme1", 00:16:00.252 "trtype": "tcp", 00:16:00.252 "traddr": "10.0.0.2", 00:16:00.252 "adrfam": "ipv4", 00:16:00.252 "trsvcid": "4420", 00:16:00.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:00.252 "hdgst": false, 00:16:00.252 "ddgst": false 00:16:00.252 }, 00:16:00.252 "method": "bdev_nvme_attach_controller" 00:16:00.252 }' 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.570818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.570843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.582768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.582791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.594769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.594792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.606771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.606793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 [2024-11-17 22:15:56.608338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:00.252 [2024-11-17 22:15:56.608430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75687 ] 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.618774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.618797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.630777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.630800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.642779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.642803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.654780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.654811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.666788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.666812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.678786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.678810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.690787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.690811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.702791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.702815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.714793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.714815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.726796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.726819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.738816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.738838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.745530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.252 [2024-11-17 22:15:56.750840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.750861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.762822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.762843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.774829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.774851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.252 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.252 [2024-11-17 22:15:56.786832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.252 [2024-11-17 22:15:56.786854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.253 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.253 [2024-11-17 22:15:56.798850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.253 [2024-11-17 22:15:56.798875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.253 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.253 [2024-11-17 22:15:56.810857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.253 [2024-11-17 22:15:56.810890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.253 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.253 [2024-11-17 22:15:56.816685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.253 [2024-11-17 22:15:56.822868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.253 [2024-11-17 22:15:56.822894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.253 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.253 [2024-11-17 22:15:56.834881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.253 [2024-11-17 22:15:56.834907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.253 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:56.846882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:56.846916] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:56.858880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:56.858905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:56.870902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:56.870927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:56.882893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:56.882925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:56.894890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:56.894914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:56.906894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:56.906918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:56.918924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:56.918954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:56.930909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 
22:15:56.930949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:56.942914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:56.942941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:56.950911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:56.950938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:56.962919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:56.962946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:56.974931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:56.974960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 Running I/O for 5 seconds... 
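Each of the repeated failures above corresponds to one nvmf_subsystem_add_ns JSON-RPC call with the parameters shown in the error text; it is rejected with code -32602 because NSID 1 was already attached to malloc0 during the setup earlier in this test. Functionally it is equivalent to the rpc.py call sketched below (the calls here are actually issued through the JSON-RPC client while the bdevperf job is prepared, and the surrounding retry logic in zcopy.sh is not shown in this excerpt):

# Equivalent single attempt; fails because NSID 1 is already in use on cnode1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
    nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Expected result, matching the log lines above:
#   error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters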
00:16:00.513 [2024-11-17 22:15:56.986934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:56.986958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:57.003235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:57.003266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:57.014358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:57.014390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:57.029946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:57.029979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:57.047145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:57.047176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:57.063216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:57.063247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:57.078920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:57.078950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:57.092998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:57.093029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.513 [2024-11-17 22:15:57.108382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.513 [2024-11-17 22:15:57.108414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.513 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.125596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.125628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.140892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.140923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.157541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.157572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.174257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.174287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.191031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.191061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.207925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.207955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.224266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.224296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.241384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.241413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.257714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.257755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.274157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.274219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.290391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.290422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.306674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.306704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.323329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.773 [2024-11-17 22:15:57.323361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.773 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.773 [2024-11-17 22:15:57.339250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.774 [2024-11-17 22:15:57.339280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.774 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.774 [2024-11-17 22:15:57.355002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.774 [2024-11-17 22:15:57.355032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.774 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.774 [2024-11-17 22:15:57.367933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.774 [2024-11-17 22:15:57.367964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.774 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:00.774 [2024-11-17 22:15:57.383850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.774 [2024-11-17 22:15:57.383883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.400218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.400248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.416937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.416968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.432855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.432885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.449494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.449525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.466210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.466241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.482742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.482781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.499462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.499495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.516186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.516218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.532927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.532958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.549209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.549240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.565759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.565788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.582527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.582558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.598691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.598720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.615004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.615034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.034 [2024-11-17 22:15:57.631357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.034 [2024-11-17 22:15:57.631388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.034 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.648638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.648669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.663799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.663826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.676525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.676556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.693011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.693043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.709047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.709078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.723946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.723977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.740606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.740637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.757368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.757398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.773687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.773718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.790242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.790273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.806354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.806385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.822913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.822943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.839243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.839274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.855725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.855766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.872641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.872671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.888360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.888390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.294 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.294 [2024-11-17 22:15:57.905979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.294 [2024-11-17 22:15:57.906011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:57.920919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:57.920949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:57.936704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:57.936745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:57.952316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:57.952346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:57.968669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:57.968699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:57.985061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:57.985091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 
22:15:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:58.001318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:58.001350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:58.018339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:58.018370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:58.034794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:58.034824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:58.050793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:58.050833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:58.067391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:58.067421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:58.083985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:58.084017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:58.100157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:58.100187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:01.554 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:58.116021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:58.116051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:58.127035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:58.127067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:58.142007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:58.142041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.554 [2024-11-17 22:15:58.158288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.554 [2024-11-17 22:15:58.158319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.554 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.813 [2024-11-17 22:15:58.174908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.813 [2024-11-17 22:15:58.174938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.813 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.813 [2024-11-17 22:15:58.191793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.813 [2024-11-17 22:15:58.191823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.813 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.813 [2024-11-17 22:15:58.208137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.813 [2024-11-17 22:15:58.208168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:01.813 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.813 [2024-11-17 22:15:58.224701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.813 [2024-11-17 22:15:58.224731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.813 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.813 [2024-11-17 22:15:58.241320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.813 [2024-11-17 22:15:58.241350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.814 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.814 [2024-11-17 22:15:58.257564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.814 [2024-11-17 22:15:58.257594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.814 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.814 [2024-11-17 22:15:58.273819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.814 [2024-11-17 22:15:58.273878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.814 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.814 [2024-11-17 22:15:58.290843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.814 [2024-11-17 22:15:58.290872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.814 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.814 [2024-11-17 22:15:58.306661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.814 [2024-11-17 22:15:58.306691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.814 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.814 [2024-11-17 22:15:58.323191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.814 [2024-11-17 22:15:58.323221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:01.814 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.814 [2024-11-17 22:15:58.340034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.814 [2024-11-17 22:15:58.340065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.814 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.814 [2024-11-17 22:15:58.355836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.814 [2024-11-17 22:15:58.355866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.814 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.814 [2024-11-17 22:15:58.372187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.814 [2024-11-17 22:15:58.372217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.814 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.814 [2024-11-17 22:15:58.383692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.814 [2024-11-17 22:15:58.383722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.814 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.814 [2024-11-17 22:15:58.399910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.814 [2024-11-17 22:15:58.399940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.814 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:01.814 [2024-11-17 22:15:58.415489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.814 [2024-11-17 22:15:58.415519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.814 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.073 [2024-11-17 22:15:58.432529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.073 [2024-11-17 22:15:58.432560] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.073 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.073 [2024-11-17 22:15:58.448545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.073 [2024-11-17 22:15:58.448576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.073 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.073 [2024-11-17 22:15:58.464919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.073 [2024-11-17 22:15:58.464949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.073 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.073 [2024-11-17 22:15:58.482171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.073 [2024-11-17 22:15:58.482233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.073 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.073 [2024-11-17 22:15:58.496442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.073 [2024-11-17 22:15:58.496473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.073 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.073 [2024-11-17 22:15:58.512788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.073 [2024-11-17 22:15:58.512817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.073 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.073 [2024-11-17 22:15:58.528043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.073 [2024-11-17 22:15:58.528075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.073 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.073 [2024-11-17 22:15:58.540014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.073 [2024-11-17 
22:15:58.540044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.073 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.074 [2024-11-17 22:15:58.554938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.074 [2024-11-17 22:15:58.554968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.074 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.074 [2024-11-17 22:15:58.571266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.074 [2024-11-17 22:15:58.571296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.074 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.074 [2024-11-17 22:15:58.588056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.074 [2024-11-17 22:15:58.588086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.074 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.074 [2024-11-17 22:15:58.604871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.074 [2024-11-17 22:15:58.604902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.074 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.074 [2024-11-17 22:15:58.621296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.074 [2024-11-17 22:15:58.621327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.074 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.074 [2024-11-17 22:15:58.637953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.074 [2024-11-17 22:15:58.637987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.074 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.074 [2024-11-17 22:15:58.654631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:02.074 [2024-11-17 22:15:58.654661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.074 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.074 [2024-11-17 22:15:58.670738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.074 [2024-11-17 22:15:58.670781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.074 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.333 [2024-11-17 22:15:58.687308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.333 [2024-11-17 22:15:58.687338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.333 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.333 [2024-11-17 22:15:58.703948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.333 [2024-11-17 22:15:58.703978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.333 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.333 [2024-11-17 22:15:58.721247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.333 [2024-11-17 22:15:58.721278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.333 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.333 [2024-11-17 22:15:58.738050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.333 [2024-11-17 22:15:58.738084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.333 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.754443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.334 [2024-11-17 22:15:58.754473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.334 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.770501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:02.334 [2024-11-17 22:15:58.770532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.334 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.786611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.334 [2024-11-17 22:15:58.786642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.334 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.798326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.334 [2024-11-17 22:15:58.798354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.334 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.814217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.334 [2024-11-17 22:15:58.814248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.334 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.831218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.334 [2024-11-17 22:15:58.831246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.334 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.847088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.334 [2024-11-17 22:15:58.847118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.334 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.863965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.334 [2024-11-17 22:15:58.863996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.334 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.880479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:02.334 [2024-11-17 22:15:58.880510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.334 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.897206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.334 [2024-11-17 22:15:58.897236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.334 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.913235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.334 [2024-11-17 22:15:58.913265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.334 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.929380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.334 [2024-11-17 22:15:58.929411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.334 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.334 [2024-11-17 22:15:58.944335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.334 [2024-11-17 22:15:58.944366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.593 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.593 [2024-11-17 22:15:58.961351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.593 [2024-11-17 22:15:58.961381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.593 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.593 [2024-11-17 22:15:58.978030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.593 [2024-11-17 22:15:58.978065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.593 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.593 [2024-11-17 22:15:58.995033] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.593 [2024-11-17 22:15:58.995063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.593 2024/11/17 22:15:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 22:15:59.010938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.010968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 22:15:59.021929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.021963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 22:15:59.037089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.037119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 22:15:59.047775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.047802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 22:15:59.062910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.062940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 22:15:59.079541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.079571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 
22:15:59.096066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.096096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 22:15:59.111035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.111066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 22:15:59.125223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.125254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 22:15:59.140522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.140552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 22:15:59.156860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.156889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 22:15:59.172959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.172990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.594 [2024-11-17 22:15:59.189668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.189699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.594 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
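[editor's note] For context, the call this test loop keeps retrying is the nvmf_subsystem_add_ns JSON-RPC method against nqn.2016-06.io.spdk:cnode1 with bdev_name malloc0 and nsid 1; because NSID 1 is already attached, the target rejects every attempt with Code=-32602 Msg=Invalid parameters, which is exactly what the repeated entries above and below show. A minimal sketch of that request follows, assuming the default /var/tmp/spdk.sock RPC socket and a raw-socket client (both the socket path and the client approach are illustrative assumptions, not taken from this log; the method name and parameters are copied from the entries above).

    #!/usr/bin/env python3
    # Sketch: issue the nvmf_subsystem_add_ns RPC seen in the log above.
    # Assumes an SPDK target on the default /var/tmp/spdk.sock with bdev
    # "malloc0" already attached to cnode1 as NSID 1, so this second add
    # is expected to fail with JSON-RPC error -32602 (Invalid parameters).
    import json
    import socket

    SOCK_PATH = "/var/tmp/spdk.sock"  # default SPDK RPC socket (assumption)

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(request).encode())
        reply = json.loads(sock.recv(65536).decode())

    # On a duplicate NSID the target answers with an error object, e.g.
    # {"code": -32602, "message": "Invalid parameters"}, matching the log.
    print(reply.get("error", reply.get("result")))

[end editor's note]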
00:16:02.594 [2024-11-17 22:15:59.206146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.594 [2024-11-17 22:15:59.206193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.223372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.223402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.239859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.239887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.256363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.256394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.273067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.273108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.289446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.289474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.305550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.305580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.321997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.322027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.338411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.338440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.354581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.354610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.370843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.370872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.383114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.383144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.394638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.394669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.409959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.409991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.426216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.426246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.442449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.442478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.853 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:02.853 [2024-11-17 22:15:59.458985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.853 [2024-11-17 22:15:59.459015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.854 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.112 [2024-11-17 22:15:59.475479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.475509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.492140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.492172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.508124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.508154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.524828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.524858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.541331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.541361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.557797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.557828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.574168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.574244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.590506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.590536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.606353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.606382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.622988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.623019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.639876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.639906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.655673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.655703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.672342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.672372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.688251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.688282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.699134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.699164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.113 [2024-11-17 22:15:59.715551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.113 [2024-11-17 22:15:59.715581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.113 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.730756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.730794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.741943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.741976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.759417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.759453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.773344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.773374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.788007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.788038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.799242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.799271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.815753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.815779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.831188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.831218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.843075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.843105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.858432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.858462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.875068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.875099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.892027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.892057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.908250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.908279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.924351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.924381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.940447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.940478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.952398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.952429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.373 2024/11/17 22:15:59 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.373 [2024-11-17 22:15:59.967618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.373 [2024-11-17 22:15:59.967647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.374 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.374 [2024-11-17 22:15:59.985337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.374 [2024-11-17 22:15:59.985365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:15:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:15:59.999455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:15:59.999485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.013621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.013656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.031476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.031507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.045511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.045542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.060085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.060115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.076373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.076404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.092320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.092351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.108974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.109005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.125274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.125305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.141763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.141794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.158270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.158301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.174520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.174550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 
22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.189942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.189973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.206855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.206883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.223395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.223427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.634 [2024-11-17 22:16:00.240553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.634 [2024-11-17 22:16:00.240584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.634 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.256339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.256369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.271500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.271535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.288298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.288328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.304511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.304542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.320726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.320766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.337164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.337194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.353357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.353387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.370127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.370158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.386499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.386529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.402732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.402773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.418625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.418655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.430069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.430103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.446718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.446763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.460253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.460283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.476218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.476248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.894 [2024-11-17 22:16:00.491841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.894 [2024-11-17 22:16:00.491873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.894 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.509060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.509092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.526289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.526320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.542448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.542479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.558862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.558892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.574837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.574868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.591231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.591261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.607578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.607607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.623879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.623909] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.640523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.640554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.656211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.656242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.667720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.667762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.682989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.683020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.700052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.700082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.716676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.716705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.733159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 
22:16:00.733189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.749328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.749357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.153 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.153 [2024-11-17 22:16:00.766052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.153 [2024-11-17 22:16:00.766084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.412 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.781922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:00.781955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.798502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:00.798532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.814778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:00.814808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.831418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:00.831448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.847976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:04.413 [2024-11-17 22:16:00.848007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.864221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:00.864252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.881101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:00.881132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.896990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:00.897021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.913105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:00.913136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.929378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:00.929409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.941315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:00.941346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.956709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:04.413 [2024-11-17 22:16:00.956749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.973332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:00.973362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:00.990096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:00.990143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:01.006188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:01.006217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.413 [2024-11-17 22:16:01.017889] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.413 [2024-11-17 22:16:01.017921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.413 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.671 [2024-11-17 22:16:01.033968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.671 [2024-11-17 22:16:01.034001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.671 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.671 [2024-11-17 22:16:01.050433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.671 [2024-11-17 22:16:01.050463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.671 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.671 [2024-11-17 22:16:01.066186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:04.671 [2024-11-17 22:16:01.066216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.671 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.671 [2024-11-17 22:16:01.082476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.671 [2024-11-17 22:16:01.082506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.671 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.671 [2024-11-17 22:16:01.094478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.671 [2024-11-17 22:16:01.094508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.671 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.671 [2024-11-17 22:16:01.109803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.671 [2024-11-17 22:16:01.109832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.671 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.671 [2024-11-17 22:16:01.126184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.671 [2024-11-17 22:16:01.126230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.671 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.671 [2024-11-17 22:16:01.142840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.671 [2024-11-17 22:16:01.142866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.671 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.671 [2024-11-17 22:16:01.159154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.672 [2024-11-17 22:16:01.159185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.672 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.672 [2024-11-17 22:16:01.175701] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.672 [2024-11-17 22:16:01.175731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.672 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.672 [2024-11-17 22:16:01.191692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.672 [2024-11-17 22:16:01.191722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.672 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.672 [2024-11-17 22:16:01.202308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.672 [2024-11-17 22:16:01.202338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.672 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.672 [2024-11-17 22:16:01.217522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.672 [2024-11-17 22:16:01.217550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.672 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.672 [2024-11-17 22:16:01.234264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.672 [2024-11-17 22:16:01.234295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.672 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.672 [2024-11-17 22:16:01.250463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.672 [2024-11-17 22:16:01.250494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.672 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.672 [2024-11-17 22:16:01.267584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.672 [2024-11-17 22:16:01.267616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.672 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.672 [2024-11-17 
22:16:01.283680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.672 [2024-11-17 22:16:01.283711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.930 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.930 [2024-11-17 22:16:01.299585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.930 [2024-11-17 22:16:01.299615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.930 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.930 [2024-11-17 22:16:01.316515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.930 [2024-11-17 22:16:01.316545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.930 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.930 [2024-11-17 22:16:01.333251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.333282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.931 [2024-11-17 22:16:01.349613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.349645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.931 [2024-11-17 22:16:01.365956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.365989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.931 [2024-11-17 22:16:01.382129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.382175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
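Every entry in this long run of failures is the same negative-path probe: the caller keeps asking the target to attach malloc0 under NSID 1, which subsystem nqn.2016-06.io.spdk:cnode1 already owns, so each JSON-RPC call is rejected with Code=-32602 (Invalid parameters) and the "Requested NSID 1 already in use" message. A minimal way to provoke a single instance of this error, assuming a running SPDK target with that subsystem and namespace already configured and the stock scripts/rpc.py client (an illustration, not the harness's exact wrapper), might look like:
  # second add of the same NSID is expected to fail with "Requested NSID 1 already in use"
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
  # equivalent raw JSON-RPC request, matching the params map logged by the Go client above:
  # {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
  #  "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
  #             "namespace": {"bdev_name": "malloc0", "nsid": 1}}}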
00:16:04.931 [2024-11-17 22:16:01.399202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.399232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.931 [2024-11-17 22:16:01.415249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.415279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.931 [2024-11-17 22:16:01.432391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.432422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.931 [2024-11-17 22:16:01.449429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.449459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.931 [2024-11-17 22:16:01.465332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.465362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.931 [2024-11-17 22:16:01.481471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.481502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.931 [2024-11-17 22:16:01.498677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.498709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:04.931 [2024-11-17 22:16:01.514817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.514846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.931 [2024-11-17 22:16:01.531004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.931 [2024-11-17 22:16:01.531034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.931 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.548285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.548315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.564835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.564865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.581024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.581056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.597767] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.597798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.613719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.613757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.630409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.630439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.646561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.646590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.662637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.662667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.674690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.674719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.689533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.689564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.706234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.706265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.722715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.722756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.738938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.738967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.190 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.190 [2024-11-17 22:16:01.755550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.190 [2024-11-17 22:16:01.755580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.191 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.191 [2024-11-17 22:16:01.772943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.191 [2024-11-17 22:16:01.772975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.191 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.191 [2024-11-17 22:16:01.789233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.191 [2024-11-17 22:16:01.789264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.191 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 [2024-11-17 22:16:01.807167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.450 [2024-11-17 22:16:01.807199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.450 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 [2024-11-17 22:16:01.823053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.450 [2024-11-17 22:16:01.823083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.450 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 [2024-11-17 22:16:01.839372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.450 [2024-11-17 22:16:01.839402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.450 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 [2024-11-17 22:16:01.856219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.450 [2024-11-17 22:16:01.856249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.450 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 [2024-11-17 22:16:01.872746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.450 [2024-11-17 22:16:01.872775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.450 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 [2024-11-17 22:16:01.889199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.450 [2024-11-17 22:16:01.889231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.450 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 [2024-11-17 22:16:01.905758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.450 [2024-11-17 22:16:01.905788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.450 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 [2024-11-17 22:16:01.922374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.450 [2024-11-17 22:16:01.922421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.450 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 [2024-11-17 22:16:01.937820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.450 [2024-11-17 22:16:01.937898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.450 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 [2024-11-17 22:16:01.953417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.450 [2024-11-17 22:16:01.953447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.450 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 [2024-11-17 22:16:01.970413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.450 [2024-11-17 22:16:01.970443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.450 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 [2024-11-17 22:16:01.986298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.450 [2024-11-17 22:16:01.986329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.450 2024/11/17 22:16:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.450 00:16:05.450 Latency(us) 00:16:05.450 [2024-11-17T22:16:02.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.450 [2024-11-17T22:16:02.065Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:05.450 Nvme1n1 : 5.01 14081.60 110.01 0.00 0.00 9079.75 3872.58 18826.71 00:16:05.451 [2024-11-17T22:16:02.066Z] =================================================================================================================== 00:16:05.451 [2024-11-17T22:16:02.066Z] Total : 14081.60 110.01 0.00 0.00 9079.75 3872.58 18826.71 00:16:05.451 [2024-11-17 22:16:01.997964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.451 [2024-11-17 22:16:01.997998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.451 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.451 [2024-11-17 22:16:02.009952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.451 [2024-11-17 22:16:02.009982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.451 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.451 [2024-11-17 22:16:02.021936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.451 [2024-11-17 22:16:02.021965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.451 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.451 [2024-11-17 22:16:02.033935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.451 [2024-11-17 22:16:02.033963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.451 
2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.451 [2024-11-17 22:16:02.045954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.451 [2024-11-17 22:16:02.045991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.451 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.451 [2024-11-17 22:16:02.057958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.451 [2024-11-17 22:16:02.057984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.451 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.710 [2024-11-17 22:16:02.069976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.710 [2024-11-17 22:16:02.070013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.710 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.710 [2024-11-17 22:16:02.081977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.710 [2024-11-17 22:16:02.082002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.710 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.710 [2024-11-17 22:16:02.093977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.710 [2024-11-17 22:16:02.094002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.710 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.710 [2024-11-17 22:16:02.105981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.710 [2024-11-17 22:16:02.106006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.710 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.710 [2024-11-17 22:16:02.117983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.710 [2024-11-17 22:16:02.118008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:16:05.710 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.710 [2024-11-17 22:16:02.129987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.710 [2024-11-17 22:16:02.130012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.710 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.710 [2024-11-17 22:16:02.141989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.710 [2024-11-17 22:16:02.142023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.711 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.711 [2024-11-17 22:16:02.153991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.711 [2024-11-17 22:16:02.154027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.711 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.711 [2024-11-17 22:16:02.165996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.711 [2024-11-17 22:16:02.166031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.711 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.711 [2024-11-17 22:16:02.178001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.711 [2024-11-17 22:16:02.178027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.711 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.711 [2024-11-17 22:16:02.190005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.711 [2024-11-17 22:16:02.190031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.711 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.711 [2024-11-17 22:16:02.202009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.711 [2024-11-17 22:16:02.202036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:05.711 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.711 [2024-11-17 22:16:02.214011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.711 [2024-11-17 22:16:02.214035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.711 2024/11/17 22:16:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.711 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75687) - No such process 00:16:05.711 22:16:02 -- target/zcopy.sh@49 -- # wait 75687 00:16:05.711 22:16:02 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.711 22:16:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.711 22:16:02 -- common/autotest_common.sh@10 -- # set +x 00:16:05.711 22:16:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.711 22:16:02 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:05.711 22:16:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.711 22:16:02 -- common/autotest_common.sh@10 -- # set +x 00:16:05.711 delay0 00:16:05.711 22:16:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.711 22:16:02 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:05.711 22:16:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.711 22:16:02 -- common/autotest_common.sh@10 -- # set +x 00:16:05.711 22:16:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.711 22:16:02 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:05.969 [2024-11-17 22:16:02.417554] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:12.532 Initializing NVMe Controllers 00:16:12.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:12.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:12.532 Initialization complete. Launching workers. 
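After the duplicate-NSID loop is killed (the "No such process" line above), the test re-points NSID 1 at a delay bdev stacked on malloc0 and drives the target with SPDK's abort example for five seconds, which is what produces the per-controller statistics that follow. The rpc_cmd calls above map onto the stock scripts/rpc.py client roughly as sketched below (an assumption about the wrapper; the delay values are the four latency parameters taken from the log, nominally in microseconds):
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000      # build delay0 on top of malloc0
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 delay0
  # then issue aborts against the slowed-down namespace, exactly as logged above:
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'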
00:16:12.532 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 110 00:16:12.532 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 397, failed to submit 33 00:16:12.532 success 219, unsuccess 178, failed 0 00:16:12.532 22:16:08 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:12.532 22:16:08 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:12.532 22:16:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:12.532 22:16:08 -- nvmf/common.sh@116 -- # sync 00:16:12.532 22:16:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:12.532 22:16:08 -- nvmf/common.sh@119 -- # set +e 00:16:12.532 22:16:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:12.532 22:16:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:12.532 rmmod nvme_tcp 00:16:12.532 rmmod nvme_fabrics 00:16:12.532 rmmod nvme_keyring 00:16:12.532 22:16:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:12.532 22:16:08 -- nvmf/common.sh@123 -- # set -e 00:16:12.532 22:16:08 -- nvmf/common.sh@124 -- # return 0 00:16:12.532 22:16:08 -- nvmf/common.sh@477 -- # '[' -n 75514 ']' 00:16:12.532 22:16:08 -- nvmf/common.sh@478 -- # killprocess 75514 00:16:12.532 22:16:08 -- common/autotest_common.sh@936 -- # '[' -z 75514 ']' 00:16:12.532 22:16:08 -- common/autotest_common.sh@940 -- # kill -0 75514 00:16:12.532 22:16:08 -- common/autotest_common.sh@941 -- # uname 00:16:12.532 22:16:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:12.532 22:16:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75514 00:16:12.532 22:16:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:12.532 22:16:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:12.532 killing process with pid 75514 00:16:12.532 22:16:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75514' 00:16:12.532 22:16:08 -- common/autotest_common.sh@955 -- # kill 75514 00:16:12.532 22:16:08 -- common/autotest_common.sh@960 -- # wait 75514 00:16:12.532 22:16:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:12.532 22:16:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:12.532 22:16:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:12.532 22:16:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.532 22:16:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:12.532 22:16:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.532 22:16:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.532 22:16:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.532 22:16:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:12.532 00:16:12.532 real 0m24.850s 00:16:12.532 user 0m38.693s 00:16:12.532 sys 0m7.529s 00:16:12.532 22:16:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:12.532 22:16:08 -- common/autotest_common.sh@10 -- # set +x 00:16:12.532 ************************************ 00:16:12.532 END TEST nvmf_zcopy 00:16:12.532 ************************************ 00:16:12.532 22:16:09 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:12.532 22:16:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:12.532 22:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:12.532 22:16:09 -- common/autotest_common.sh@10 -- # set +x 00:16:12.532 ************************************ 00:16:12.532 START TEST 
nvmf_nmic 00:16:12.532 ************************************ 00:16:12.532 22:16:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:12.532 * Looking for test storage... 00:16:12.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:12.532 22:16:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:12.532 22:16:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:12.532 22:16:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:12.791 22:16:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:12.791 22:16:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:12.791 22:16:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:12.791 22:16:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:12.791 22:16:09 -- scripts/common.sh@335 -- # IFS=.-: 00:16:12.791 22:16:09 -- scripts/common.sh@335 -- # read -ra ver1 00:16:12.791 22:16:09 -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.791 22:16:09 -- scripts/common.sh@336 -- # read -ra ver2 00:16:12.791 22:16:09 -- scripts/common.sh@337 -- # local 'op=<' 00:16:12.791 22:16:09 -- scripts/common.sh@339 -- # ver1_l=2 00:16:12.791 22:16:09 -- scripts/common.sh@340 -- # ver2_l=1 00:16:12.791 22:16:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:12.791 22:16:09 -- scripts/common.sh@343 -- # case "$op" in 00:16:12.791 22:16:09 -- scripts/common.sh@344 -- # : 1 00:16:12.791 22:16:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:12.791 22:16:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:12.791 22:16:09 -- scripts/common.sh@364 -- # decimal 1 00:16:12.791 22:16:09 -- scripts/common.sh@352 -- # local d=1 00:16:12.791 22:16:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.791 22:16:09 -- scripts/common.sh@354 -- # echo 1 00:16:12.791 22:16:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:12.791 22:16:09 -- scripts/common.sh@365 -- # decimal 2 00:16:12.791 22:16:09 -- scripts/common.sh@352 -- # local d=2 00:16:12.791 22:16:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.791 22:16:09 -- scripts/common.sh@354 -- # echo 2 00:16:12.791 22:16:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:12.791 22:16:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:12.791 22:16:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:12.791 22:16:09 -- scripts/common.sh@367 -- # return 0 00:16:12.791 22:16:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.791 22:16:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:12.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.791 --rc genhtml_branch_coverage=1 00:16:12.791 --rc genhtml_function_coverage=1 00:16:12.791 --rc genhtml_legend=1 00:16:12.791 --rc geninfo_all_blocks=1 00:16:12.791 --rc geninfo_unexecuted_blocks=1 00:16:12.791 00:16:12.791 ' 00:16:12.791 22:16:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:12.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.791 --rc genhtml_branch_coverage=1 00:16:12.791 --rc genhtml_function_coverage=1 00:16:12.791 --rc genhtml_legend=1 00:16:12.791 --rc geninfo_all_blocks=1 00:16:12.791 --rc geninfo_unexecuted_blocks=1 00:16:12.791 00:16:12.791 ' 00:16:12.791 22:16:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:12.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.791 --rc 
genhtml_branch_coverage=1 00:16:12.791 --rc genhtml_function_coverage=1 00:16:12.791 --rc genhtml_legend=1 00:16:12.791 --rc geninfo_all_blocks=1 00:16:12.791 --rc geninfo_unexecuted_blocks=1 00:16:12.791 00:16:12.791 ' 00:16:12.791 22:16:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:12.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.791 --rc genhtml_branch_coverage=1 00:16:12.791 --rc genhtml_function_coverage=1 00:16:12.791 --rc genhtml_legend=1 00:16:12.791 --rc geninfo_all_blocks=1 00:16:12.791 --rc geninfo_unexecuted_blocks=1 00:16:12.791 00:16:12.791 ' 00:16:12.791 22:16:09 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.791 22:16:09 -- nvmf/common.sh@7 -- # uname -s 00:16:12.791 22:16:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.791 22:16:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.791 22:16:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.791 22:16:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.791 22:16:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.791 22:16:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.791 22:16:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.791 22:16:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.791 22:16:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.792 22:16:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.792 22:16:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:16:12.792 22:16:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:16:12.792 22:16:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.792 22:16:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.792 22:16:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.792 22:16:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.792 22:16:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.792 22:16:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.792 22:16:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.792 22:16:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.792 22:16:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.792 22:16:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.792 22:16:09 -- paths/export.sh@5 -- # export PATH 00:16:12.792 22:16:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.792 22:16:09 -- nvmf/common.sh@46 -- # : 0 00:16:12.792 22:16:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:12.792 22:16:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:12.792 22:16:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:12.792 22:16:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.792 22:16:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.792 22:16:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:12.792 22:16:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:12.792 22:16:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:12.792 22:16:09 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:12.792 22:16:09 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:12.792 22:16:09 -- target/nmic.sh@14 -- # nvmftestinit 00:16:12.792 22:16:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:12.792 22:16:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.792 22:16:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:12.792 22:16:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:12.792 22:16:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:12.792 22:16:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.792 22:16:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.792 22:16:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.792 22:16:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:12.792 22:16:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:12.792 22:16:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:12.792 22:16:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:12.792 22:16:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:12.792 22:16:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:12.792 22:16:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.792 22:16:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.792 22:16:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:12.792 22:16:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:12.792 22:16:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.792 22:16:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.792 22:16:09 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.792 22:16:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.792 22:16:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.792 22:16:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.792 22:16:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.792 22:16:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.792 22:16:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:12.792 22:16:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:12.792 Cannot find device "nvmf_tgt_br" 00:16:12.792 22:16:09 -- nvmf/common.sh@154 -- # true 00:16:12.792 22:16:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.792 Cannot find device "nvmf_tgt_br2" 00:16:12.792 22:16:09 -- nvmf/common.sh@155 -- # true 00:16:12.792 22:16:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:12.792 22:16:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:12.792 Cannot find device "nvmf_tgt_br" 00:16:12.792 22:16:09 -- nvmf/common.sh@157 -- # true 00:16:12.792 22:16:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:12.792 Cannot find device "nvmf_tgt_br2" 00:16:12.792 22:16:09 -- nvmf/common.sh@158 -- # true 00:16:12.792 22:16:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:12.792 22:16:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:12.792 22:16:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.792 22:16:09 -- nvmf/common.sh@161 -- # true 00:16:12.792 22:16:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.792 22:16:09 -- nvmf/common.sh@162 -- # true 00:16:12.792 22:16:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:12.792 22:16:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:12.792 22:16:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:12.792 22:16:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:12.792 22:16:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.050 22:16:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.050 22:16:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.050 22:16:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:13.050 22:16:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:13.050 22:16:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:13.050 22:16:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:13.050 22:16:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:13.050 22:16:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:13.051 22:16:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.051 22:16:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.051 22:16:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:13.051 22:16:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:13.051 22:16:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:13.051 22:16:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.051 22:16:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.051 22:16:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.051 22:16:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.051 22:16:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.051 22:16:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:13.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:16:13.051 00:16:13.051 --- 10.0.0.2 ping statistics --- 00:16:13.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.051 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:13.051 22:16:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:13.051 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.051 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:16:13.051 00:16:13.051 --- 10.0.0.3 ping statistics --- 00:16:13.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.051 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:13.051 22:16:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:13.051 00:16:13.051 --- 10.0.0.1 ping statistics --- 00:16:13.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.051 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:13.051 22:16:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.051 22:16:09 -- nvmf/common.sh@421 -- # return 0 00:16:13.051 22:16:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:13.051 22:16:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.051 22:16:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:13.051 22:16:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:13.051 22:16:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.051 22:16:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:13.051 22:16:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:13.051 22:16:09 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:13.051 22:16:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:13.051 22:16:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:13.051 22:16:09 -- common/autotest_common.sh@10 -- # set +x 00:16:13.051 22:16:09 -- nvmf/common.sh@469 -- # nvmfpid=76018 00:16:13.051 22:16:09 -- nvmf/common.sh@470 -- # waitforlisten 76018 00:16:13.051 22:16:09 -- common/autotest_common.sh@829 -- # '[' -z 76018 ']' 00:16:13.051 22:16:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.051 22:16:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:13.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
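The nvmf_veth_init trace above rebuilds the test network from scratch: teardown of any previous run is allowed to fail (the "Cannot find device" / "Cannot open network namespace" messages are expected on a clean host), then a fresh nvmf_tgt_ns_spdk namespace receives the target ends of two veth pairs (10.0.0.2 and 10.0.0.3) while the initiator end (10.0.0.1) and the nvmf_br bridge stay in the root namespace, port 4420 is opened in iptables, and single-packet pings confirm the wiring before the target is launched inside the namespace. A minimal standalone sketch of the same wiring, reusing the interface names and addresses from the trace (one veth pair shown; the second target interface is built the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # bridge joins the two halves
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # root ns -> target ns, as in the trace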
00:16:13.051 22:16:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.051 22:16:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.051 22:16:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.051 22:16:09 -- common/autotest_common.sh@10 -- # set +x 00:16:13.051 [2024-11-17 22:16:09.660112] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:13.051 [2024-11-17 22:16:09.660211] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.310 [2024-11-17 22:16:09.799913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:13.310 [2024-11-17 22:16:09.892792] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:13.310 [2024-11-17 22:16:09.893286] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.310 [2024-11-17 22:16:09.893316] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.310 [2024-11-17 22:16:09.893329] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.310 [2024-11-17 22:16:09.893861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.310 [2024-11-17 22:16:09.893935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.310 [2024-11-17 22:16:09.894019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.310 [2024-11-17 22:16:09.894031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.247 22:16:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.247 22:16:10 -- common/autotest_common.sh@862 -- # return 0 00:16:14.247 22:16:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:14.247 22:16:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.247 22:16:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.247 22:16:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.247 22:16:10 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.247 22:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.247 22:16:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.247 [2024-11-17 22:16:10.648255] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.247 22:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.247 22:16:10 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:14.247 22:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.247 22:16:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.247 Malloc0 00:16:14.247 22:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.247 22:16:10 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:14.247 22:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.247 22:16:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.247 22:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.247 22:16:10 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:14.247 
22:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.247 22:16:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.247 22:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.247 22:16:10 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.247 22:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.247 22:16:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.247 [2024-11-17 22:16:10.711601] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.247 test case1: single bdev can't be used in multiple subsystems 00:16:14.247 22:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.247 22:16:10 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:14.247 22:16:10 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:14.247 22:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.247 22:16:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.247 22:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.247 22:16:10 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:14.247 22:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.247 22:16:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.247 22:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.247 22:16:10 -- target/nmic.sh@28 -- # nmic_status=0 00:16:14.247 22:16:10 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:14.247 22:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.247 22:16:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.247 [2024-11-17 22:16:10.735434] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:14.247 [2024-11-17 22:16:10.735466] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:14.247 [2024-11-17 22:16:10.735476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.247 2024/11/17 22:16:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.247 request: 00:16:14.247 { 00:16:14.247 "method": "nvmf_subsystem_add_ns", 00:16:14.247 "params": { 00:16:14.247 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:14.247 "namespace": { 00:16:14.247 "bdev_name": "Malloc0" 00:16:14.247 } 00:16:14.247 } 00:16:14.247 } 00:16:14.247 Got JSON-RPC error response 00:16:14.247 GoRPCClient: error on JSON-RPC call 00:16:14.247 22:16:10 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:14.247 22:16:10 -- target/nmic.sh@29 -- # nmic_status=1 00:16:14.247 22:16:10 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:14.247 22:16:10 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:14.247 Adding namespace failed - expected result. 
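test case1 above is the heart of nmic.sh: once a bdev is attached as a namespace it is claimed exclusive_write by the NVMe-oF target, so offering the same Malloc0 to a second subsystem fails ("already claimed") and the JSON-RPC call returns -32602 Invalid parameters, which is exactly the result the test expects. A condensed sketch of the same sequence issued directly through rpc.py (the test's rpc_cmd is a thin wrapper around this; NQNs, serials and sizes are copied from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first claim succeeds
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # expected to fail: bdev already claimed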
00:16:14.247 22:16:10 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:14.247 test case2: host connect to nvmf target in multiple paths 00:16:14.247 22:16:10 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:14.247 22:16:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.247 22:16:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.247 [2024-11-17 22:16:10.747541] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:14.247 22:16:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.247 22:16:10 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:14.506 22:16:10 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:14.506 22:16:11 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:14.506 22:16:11 -- common/autotest_common.sh@1187 -- # local i=0 00:16:14.506 22:16:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:14.506 22:16:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:14.506 22:16:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:17.041 22:16:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:17.041 22:16:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:17.041 22:16:13 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:17.041 22:16:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:17.041 22:16:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.041 22:16:13 -- common/autotest_common.sh@1197 -- # return 0 00:16:17.041 22:16:13 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:17.041 [global] 00:16:17.041 thread=1 00:16:17.041 invalidate=1 00:16:17.041 rw=write 00:16:17.041 time_based=1 00:16:17.041 runtime=1 00:16:17.041 ioengine=libaio 00:16:17.041 direct=1 00:16:17.041 bs=4096 00:16:17.041 iodepth=1 00:16:17.041 norandommap=0 00:16:17.041 numjobs=1 00:16:17.041 00:16:17.041 verify_dump=1 00:16:17.041 verify_backlog=512 00:16:17.041 verify_state_save=0 00:16:17.041 do_verify=1 00:16:17.041 verify=crc32c-intel 00:16:17.041 [job0] 00:16:17.041 filename=/dev/nvme0n1 00:16:17.041 Could not set queue depth (nvme0n1) 00:16:17.041 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:17.041 fio-3.35 00:16:17.041 Starting 1 thread 00:16:17.977 00:16:17.978 job0: (groupid=0, jobs=1): err= 0: pid=76122: Sun Nov 17 22:16:14 2024 00:16:17.978 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:17.978 slat (nsec): min=12554, max=85978, avg=17385.13, stdev=5715.50 00:16:17.978 clat (usec): min=94, max=7818, avg=154.33, stdev=156.07 00:16:17.978 lat (usec): min=132, max=7833, avg=171.72, stdev=156.84 00:16:17.978 clat percentiles (usec): 00:16:17.978 | 1.00th=[ 123], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 135], 00:16:17.978 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:16:17.978 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 
172], 95.00th=[ 184], 00:16:17.978 | 99.00th=[ 221], 99.50th=[ 253], 99.90th=[ 1401], 99.95th=[ 2671], 00:16:17.978 | 99.99th=[ 7832] 00:16:17.978 write: IOPS=3456, BW=13.5MiB/s (14.2MB/s)(13.5MiB/1001msec); 0 zone resets 00:16:17.978 slat (usec): min=18, max=513, avg=26.48, stdev=13.84 00:16:17.978 clat (usec): min=2, max=2955, avg=106.74, stdev=67.09 00:16:17.978 lat (usec): min=102, max=2986, avg=133.22, stdev=68.52 00:16:17.978 clat percentiles (usec): 00:16:17.978 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 94], 00:16:17.978 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 104], 00:16:17.978 | 70.00th=[ 109], 80.00th=[ 115], 90.00th=[ 125], 95.00th=[ 135], 00:16:17.978 | 99.00th=[ 159], 99.50th=[ 174], 99.90th=[ 515], 99.95th=[ 2540], 00:16:17.978 | 99.99th=[ 2966] 00:16:17.978 bw ( KiB/s): min=13445, max=13445, per=97.24%, avg=13445.00, stdev= 0.00, samples=1 00:16:17.978 iops : min= 3361, max= 3361, avg=3361.00, stdev= 0.00, samples=1 00:16:17.978 lat (usec) : 4=0.06%, 20=0.02%, 50=0.03%, 100=24.14%, 250=75.41% 00:16:17.978 lat (usec) : 500=0.09%, 750=0.08%, 1000=0.02% 00:16:17.978 lat (msec) : 2=0.09%, 4=0.05%, 10=0.02% 00:16:17.978 cpu : usr=2.30%, sys=10.40%, ctx=6554, majf=0, minf=5 00:16:17.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.978 issued rwts: total=3072,3460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.978 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.978 00:16:17.978 Run status group 0 (all jobs): 00:16:17.978 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:16:17.978 WRITE: bw=13.5MiB/s (14.2MB/s), 13.5MiB/s-13.5MiB/s (14.2MB/s-14.2MB/s), io=13.5MiB (14.2MB), run=1001-1001msec 00:16:17.978 00:16:17.978 Disk stats (read/write): 00:16:17.978 nvme0n1: ios=2795/3072, merge=0/0, ticks=464/357, in_queue=821, util=90.48% 00:16:17.978 22:16:14 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:18.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:18.237 22:16:14 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:18.237 22:16:14 -- common/autotest_common.sh@1208 -- # local i=0 00:16:18.237 22:16:14 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:18.237 22:16:14 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.237 22:16:14 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:18.237 22:16:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.237 22:16:14 -- common/autotest_common.sh@1220 -- # return 0 00:16:18.237 22:16:14 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:18.237 22:16:14 -- target/nmic.sh@53 -- # nvmftestfini 00:16:18.237 22:16:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:18.237 22:16:14 -- nvmf/common.sh@116 -- # sync 00:16:18.237 22:16:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:18.237 22:16:14 -- nvmf/common.sh@119 -- # set +e 00:16:18.237 22:16:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:18.237 22:16:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:18.237 rmmod nvme_tcp 00:16:18.237 rmmod nvme_fabrics 00:16:18.237 rmmod nvme_keyring 00:16:18.237 22:16:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:18.237 22:16:14 -- 
nvmf/common.sh@123 -- # set -e 00:16:18.237 22:16:14 -- nvmf/common.sh@124 -- # return 0 00:16:18.237 22:16:14 -- nvmf/common.sh@477 -- # '[' -n 76018 ']' 00:16:18.237 22:16:14 -- nvmf/common.sh@478 -- # killprocess 76018 00:16:18.237 22:16:14 -- common/autotest_common.sh@936 -- # '[' -z 76018 ']' 00:16:18.237 22:16:14 -- common/autotest_common.sh@940 -- # kill -0 76018 00:16:18.237 22:16:14 -- common/autotest_common.sh@941 -- # uname 00:16:18.237 22:16:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:18.237 22:16:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76018 00:16:18.237 killing process with pid 76018 00:16:18.237 22:16:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:18.237 22:16:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:18.237 22:16:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76018' 00:16:18.237 22:16:14 -- common/autotest_common.sh@955 -- # kill 76018 00:16:18.237 22:16:14 -- common/autotest_common.sh@960 -- # wait 76018 00:16:18.496 22:16:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:18.496 22:16:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:18.496 22:16:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:18.496 22:16:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:18.496 22:16:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:18.496 22:16:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.496 22:16:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.496 22:16:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.496 22:16:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:18.496 ************************************ 00:16:18.496 END TEST nvmf_nmic 00:16:18.496 ************************************ 00:16:18.496 00:16:18.496 real 0m5.989s 00:16:18.496 user 0m20.105s 00:16:18.496 sys 0m1.260s 00:16:18.496 22:16:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:18.496 22:16:15 -- common/autotest_common.sh@10 -- # set +x 00:16:18.496 22:16:15 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:18.496 22:16:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:18.496 22:16:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:18.496 22:16:15 -- common/autotest_common.sh@10 -- # set +x 00:16:18.496 ************************************ 00:16:18.496 START TEST nvmf_fio_target 00:16:18.496 ************************************ 00:16:18.496 22:16:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:18.755 * Looking for test storage... 
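Before nvmf_fio_target begins its own setup below, the nmic I/O pass above is worth unpacking: scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v picks up the freshly connected /dev/nvme0n1 and runs the one-second, 4 KiB, queue-depth-1 verified write job whose parameters are printed ahead of the run. Roughly the same job expressed as a direct fio command line (a sketch of what the wrapper amounts to, not its exact expansion):

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512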
00:16:18.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:18.755 22:16:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:18.755 22:16:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:18.755 22:16:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:18.755 22:16:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:18.755 22:16:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:18.755 22:16:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:18.755 22:16:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:18.755 22:16:15 -- scripts/common.sh@335 -- # IFS=.-: 00:16:18.755 22:16:15 -- scripts/common.sh@335 -- # read -ra ver1 00:16:18.755 22:16:15 -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.755 22:16:15 -- scripts/common.sh@336 -- # read -ra ver2 00:16:18.755 22:16:15 -- scripts/common.sh@337 -- # local 'op=<' 00:16:18.755 22:16:15 -- scripts/common.sh@339 -- # ver1_l=2 00:16:18.755 22:16:15 -- scripts/common.sh@340 -- # ver2_l=1 00:16:18.755 22:16:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:18.755 22:16:15 -- scripts/common.sh@343 -- # case "$op" in 00:16:18.755 22:16:15 -- scripts/common.sh@344 -- # : 1 00:16:18.756 22:16:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:18.756 22:16:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:18.756 22:16:15 -- scripts/common.sh@364 -- # decimal 1 00:16:18.756 22:16:15 -- scripts/common.sh@352 -- # local d=1 00:16:18.756 22:16:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.756 22:16:15 -- scripts/common.sh@354 -- # echo 1 00:16:18.756 22:16:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:18.756 22:16:15 -- scripts/common.sh@365 -- # decimal 2 00:16:18.756 22:16:15 -- scripts/common.sh@352 -- # local d=2 00:16:18.756 22:16:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.756 22:16:15 -- scripts/common.sh@354 -- # echo 2 00:16:18.756 22:16:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:18.756 22:16:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:18.756 22:16:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:18.756 22:16:15 -- scripts/common.sh@367 -- # return 0 00:16:18.756 22:16:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.756 22:16:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:18.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.756 --rc genhtml_branch_coverage=1 00:16:18.756 --rc genhtml_function_coverage=1 00:16:18.756 --rc genhtml_legend=1 00:16:18.756 --rc geninfo_all_blocks=1 00:16:18.756 --rc geninfo_unexecuted_blocks=1 00:16:18.756 00:16:18.756 ' 00:16:18.756 22:16:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:18.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.756 --rc genhtml_branch_coverage=1 00:16:18.756 --rc genhtml_function_coverage=1 00:16:18.756 --rc genhtml_legend=1 00:16:18.756 --rc geninfo_all_blocks=1 00:16:18.756 --rc geninfo_unexecuted_blocks=1 00:16:18.756 00:16:18.756 ' 00:16:18.756 22:16:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:18.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.756 --rc genhtml_branch_coverage=1 00:16:18.756 --rc genhtml_function_coverage=1 00:16:18.756 --rc genhtml_legend=1 00:16:18.756 --rc geninfo_all_blocks=1 00:16:18.756 --rc geninfo_unexecuted_blocks=1 00:16:18.756 00:16:18.756 ' 00:16:18.756 
22:16:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:18.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.756 --rc genhtml_branch_coverage=1 00:16:18.756 --rc genhtml_function_coverage=1 00:16:18.756 --rc genhtml_legend=1 00:16:18.756 --rc geninfo_all_blocks=1 00:16:18.756 --rc geninfo_unexecuted_blocks=1 00:16:18.756 00:16:18.756 ' 00:16:18.756 22:16:15 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:18.756 22:16:15 -- nvmf/common.sh@7 -- # uname -s 00:16:18.756 22:16:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.756 22:16:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.756 22:16:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.756 22:16:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.756 22:16:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.756 22:16:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.756 22:16:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.756 22:16:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.756 22:16:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.756 22:16:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.756 22:16:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:16:18.756 22:16:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:16:18.756 22:16:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.756 22:16:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.756 22:16:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:18.756 22:16:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:18.756 22:16:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.756 22:16:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.756 22:16:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.756 22:16:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.756 22:16:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.756 22:16:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.756 22:16:15 -- paths/export.sh@5 -- # export PATH 00:16:18.756 22:16:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.756 22:16:15 -- nvmf/common.sh@46 -- # : 0 00:16:18.756 22:16:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:18.756 22:16:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:18.756 22:16:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:18.756 22:16:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.756 22:16:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.756 22:16:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:18.756 22:16:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:18.756 22:16:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:18.756 22:16:15 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:18.756 22:16:15 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:18.756 22:16:15 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.756 22:16:15 -- target/fio.sh@16 -- # nvmftestinit 00:16:18.756 22:16:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:18.756 22:16:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.756 22:16:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:18.756 22:16:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:18.756 22:16:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:18.756 22:16:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.756 22:16:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.756 22:16:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.756 22:16:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:18.756 22:16:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:18.756 22:16:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:18.756 22:16:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:18.756 22:16:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:18.756 22:16:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:18.756 22:16:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.756 22:16:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.756 22:16:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:18.756 22:16:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:18.756 22:16:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:18.756 22:16:15 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:18.756 22:16:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:18.756 22:16:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.756 22:16:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:18.756 22:16:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:18.756 22:16:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:18.756 22:16:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:18.756 22:16:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:18.756 22:16:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:18.756 Cannot find device "nvmf_tgt_br" 00:16:18.756 22:16:15 -- nvmf/common.sh@154 -- # true 00:16:18.756 22:16:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:18.756 Cannot find device "nvmf_tgt_br2" 00:16:18.756 22:16:15 -- nvmf/common.sh@155 -- # true 00:16:18.756 22:16:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:18.756 22:16:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:18.756 Cannot find device "nvmf_tgt_br" 00:16:18.756 22:16:15 -- nvmf/common.sh@157 -- # true 00:16:18.756 22:16:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:18.756 Cannot find device "nvmf_tgt_br2" 00:16:18.756 22:16:15 -- nvmf/common.sh@158 -- # true 00:16:18.756 22:16:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:19.015 22:16:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:19.015 22:16:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.015 22:16:15 -- nvmf/common.sh@161 -- # true 00:16:19.015 22:16:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.015 22:16:15 -- nvmf/common.sh@162 -- # true 00:16:19.015 22:16:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:19.015 22:16:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:19.015 22:16:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:19.015 22:16:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:19.015 22:16:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:19.015 22:16:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:19.015 22:16:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:19.015 22:16:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:19.015 22:16:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:19.015 22:16:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:19.015 22:16:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:19.015 22:16:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:19.015 22:16:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:19.015 22:16:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:19.015 22:16:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:16:19.015 22:16:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:19.015 22:16:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:19.015 22:16:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:19.015 22:16:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:19.015 22:16:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:19.015 22:16:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:19.015 22:16:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:19.015 22:16:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:19.015 22:16:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:19.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:16:19.016 00:16:19.016 --- 10.0.0.2 ping statistics --- 00:16:19.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.016 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:19.016 22:16:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:19.016 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:19.016 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:19.016 00:16:19.016 --- 10.0.0.3 ping statistics --- 00:16:19.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.016 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:19.016 22:16:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:19.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:19.016 00:16:19.016 --- 10.0.0.1 ping statistics --- 00:16:19.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.016 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:19.016 22:16:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.016 22:16:15 -- nvmf/common.sh@421 -- # return 0 00:16:19.016 22:16:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:19.016 22:16:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.016 22:16:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:19.016 22:16:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:19.016 22:16:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.016 22:16:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:19.016 22:16:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:19.016 22:16:15 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:19.016 22:16:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:19.016 22:16:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:19.016 22:16:15 -- common/autotest_common.sh@10 -- # set +x 00:16:19.016 22:16:15 -- nvmf/common.sh@469 -- # nvmfpid=76316 00:16:19.016 22:16:15 -- nvmf/common.sh@470 -- # waitforlisten 76316 00:16:19.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
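nvmfappstart above launches a fresh nvmf_tgt inside the target namespace and waits for its RPC socket before any configuration is pushed; the equivalent launch written out by hand (flag meanings inferred from the notices the target prints at startup, so treat the comments as a reading of this log rather than a reference):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # -m 0xF    core mask: one reactor per set bit (the trace shows reactors on cores 0-3)
    # -e 0xFFFF tracepoint group mask ("spdk_trace -s nvmf -i 0" can snapshot them later)
    # -i 0      shared-memory instance id, also referenced by process_shm in the exit trap
    waitforlisten "$nvmfpid"      # common.sh helper: polls /var/tmp/spdk.sock until it answers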
00:16:19.016 22:16:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:19.016 22:16:15 -- common/autotest_common.sh@829 -- # '[' -z 76316 ']' 00:16:19.016 22:16:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.016 22:16:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.016 22:16:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.016 22:16:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.016 22:16:15 -- common/autotest_common.sh@10 -- # set +x 00:16:19.275 [2024-11-17 22:16:15.672148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:19.275 [2024-11-17 22:16:15.672238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.275 [2024-11-17 22:16:15.815108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.533 [2024-11-17 22:16:15.921097] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:19.533 [2024-11-17 22:16:15.921568] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.533 [2024-11-17 22:16:15.921724] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.533 [2024-11-17 22:16:15.921931] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.533 [2024-11-17 22:16:15.924110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.533 [2024-11-17 22:16:15.924266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.534 [2024-11-17 22:16:15.924364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.534 [2024-11-17 22:16:15.924368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.101 22:16:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.101 22:16:16 -- common/autotest_common.sh@862 -- # return 0 00:16:20.101 22:16:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:20.101 22:16:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:20.101 22:16:16 -- common/autotest_common.sh@10 -- # set +x 00:16:20.360 22:16:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.360 22:16:16 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:20.360 [2024-11-17 22:16:16.933732] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.360 22:16:16 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:20.930 22:16:17 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:20.930 22:16:17 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:20.930 22:16:17 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:20.930 22:16:17 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:21.189 22:16:17 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:21.189 22:16:17 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:16:21.448 22:16:18 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:21.448 22:16:18 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:21.707 22:16:18 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:21.966 22:16:18 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:21.966 22:16:18 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:22.225 22:16:18 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:22.225 22:16:18 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:22.484 22:16:18 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:22.484 22:16:18 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:22.743 22:16:19 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:23.002 22:16:19 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:23.002 22:16:19 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:23.002 22:16:19 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:23.002 22:16:19 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:23.260 22:16:19 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.519 [2024-11-17 22:16:19.968405] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.519 22:16:19 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:23.778 22:16:20 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:24.038 22:16:20 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:24.038 22:16:20 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:24.038 22:16:20 -- common/autotest_common.sh@1187 -- # local i=0 00:16:24.038 22:16:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.038 22:16:20 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:24.038 22:16:20 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:24.038 22:16:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:26.573 22:16:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:26.573 22:16:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:26.573 22:16:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:26.573 22:16:22 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:26.573 22:16:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:26.573 22:16:22 -- common/autotest_common.sh@1197 -- # return 0 00:16:26.573 22:16:22 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf 
-i 4096 -d 1 -t write -r 1 -v 00:16:26.573 [global] 00:16:26.573 thread=1 00:16:26.573 invalidate=1 00:16:26.573 rw=write 00:16:26.573 time_based=1 00:16:26.573 runtime=1 00:16:26.573 ioengine=libaio 00:16:26.573 direct=1 00:16:26.573 bs=4096 00:16:26.573 iodepth=1 00:16:26.573 norandommap=0 00:16:26.573 numjobs=1 00:16:26.573 00:16:26.573 verify_dump=1 00:16:26.573 verify_backlog=512 00:16:26.573 verify_state_save=0 00:16:26.573 do_verify=1 00:16:26.573 verify=crc32c-intel 00:16:26.573 [job0] 00:16:26.573 filename=/dev/nvme0n1 00:16:26.573 [job1] 00:16:26.573 filename=/dev/nvme0n2 00:16:26.573 [job2] 00:16:26.573 filename=/dev/nvme0n3 00:16:26.573 [job3] 00:16:26.573 filename=/dev/nvme0n4 00:16:26.573 Could not set queue depth (nvme0n1) 00:16:26.573 Could not set queue depth (nvme0n2) 00:16:26.573 Could not set queue depth (nvme0n3) 00:16:26.573 Could not set queue depth (nvme0n4) 00:16:26.573 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.573 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.573 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.573 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:26.573 fio-3.35 00:16:26.573 Starting 4 threads 00:16:27.510 00:16:27.510 job0: (groupid=0, jobs=1): err= 0: pid=76605: Sun Nov 17 22:16:23 2024 00:16:27.510 read: IOPS=2510, BW=9.81MiB/s (10.3MB/s)(9.82MiB/1001msec) 00:16:27.510 slat (nsec): min=12674, max=46737, avg=15324.67, stdev=2838.88 00:16:27.510 clat (usec): min=143, max=450, avg=187.86, stdev=24.64 00:16:27.510 lat (usec): min=157, max=467, avg=203.19, stdev=25.13 00:16:27.510 clat percentiles (usec): 00:16:27.510 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:16:27.510 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:16:27.510 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 219], 95.00th=[ 233], 00:16:27.510 | 99.00th=[ 265], 99.50th=[ 289], 99.90th=[ 388], 99.95th=[ 404], 00:16:27.510 | 99.99th=[ 449] 00:16:27.510 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:27.510 slat (usec): min=18, max=105, avg=24.34, stdev= 5.98 00:16:27.510 clat (usec): min=106, max=296, avg=163.41, stdev=22.07 00:16:27.510 lat (usec): min=126, max=333, avg=187.75, stdev=23.63 00:16:27.510 clat percentiles (usec): 00:16:27.510 | 1.00th=[ 126], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 147], 00:16:27.510 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 165], 00:16:27.510 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 194], 95.00th=[ 206], 00:16:27.510 | 99.00th=[ 231], 99.50th=[ 245], 99.90th=[ 289], 99.95th=[ 297], 00:16:27.510 | 99.99th=[ 297] 00:16:27.510 bw ( KiB/s): min=11896, max=11896, per=36.34%, avg=11896.00, stdev= 0.00, samples=1 00:16:27.510 iops : min= 2974, max= 2974, avg=2974.00, stdev= 0.00, samples=1 00:16:27.510 lat (usec) : 250=98.68%, 500=1.32% 00:16:27.510 cpu : usr=1.80%, sys=7.70%, ctx=5074, majf=0, minf=11 00:16:27.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.510 issued rwts: total=2513,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.510 latency : target=0, window=0, percentile=100.00%, depth=1 
00:16:27.510 job1: (groupid=0, jobs=1): err= 0: pid=76606: Sun Nov 17 22:16:23 2024 00:16:27.510 read: IOPS=2521, BW=9.85MiB/s (10.3MB/s)(9.86MiB/1001msec) 00:16:27.510 slat (nsec): min=12946, max=50216, avg=15792.67, stdev=3576.01 00:16:27.510 clat (usec): min=151, max=358, avg=186.83, stdev=20.53 00:16:27.510 lat (usec): min=165, max=373, avg=202.63, stdev=21.50 00:16:27.510 clat percentiles (usec): 00:16:27.510 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:16:27.510 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:16:27.510 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 227], 00:16:27.510 | 99.00th=[ 251], 99.50th=[ 273], 99.90th=[ 310], 99.95th=[ 314], 00:16:27.510 | 99.99th=[ 359] 00:16:27.510 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:27.510 slat (usec): min=19, max=110, avg=24.84, stdev= 6.04 00:16:27.510 clat (usec): min=123, max=309, avg=162.82, stdev=20.24 00:16:27.510 lat (usec): min=146, max=357, avg=187.66, stdev=22.24 00:16:27.510 clat percentiles (usec): 00:16:27.510 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:16:27.510 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:16:27.510 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 202], 00:16:27.510 | 99.00th=[ 231], 99.50th=[ 249], 99.90th=[ 289], 99.95th=[ 293], 00:16:27.510 | 99.99th=[ 310] 00:16:27.510 bw ( KiB/s): min=11984, max=11984, per=36.61%, avg=11984.00, stdev= 0.00, samples=1 00:16:27.510 iops : min= 2996, max= 2996, avg=2996.00, stdev= 0.00, samples=1 00:16:27.510 lat (usec) : 250=99.21%, 500=0.79% 00:16:27.510 cpu : usr=1.60%, sys=7.60%, ctx=5085, majf=0, minf=9 00:16:27.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.510 issued rwts: total=2524,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.510 job2: (groupid=0, jobs=1): err= 0: pid=76607: Sun Nov 17 22:16:23 2024 00:16:27.510 read: IOPS=1191, BW=4767KiB/s (4882kB/s)(4772KiB/1001msec) 00:16:27.510 slat (nsec): min=16453, max=88385, avg=21280.42, stdev=5744.00 00:16:27.510 clat (usec): min=182, max=3335, avg=374.19, stdev=93.92 00:16:27.510 lat (usec): min=201, max=3371, avg=395.47, stdev=94.75 00:16:27.510 clat percentiles (usec): 00:16:27.510 | 1.00th=[ 306], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 347], 00:16:27.510 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 371], 00:16:27.510 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 433], 00:16:27.510 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 644], 99.95th=[ 3326], 00:16:27.511 | 99.99th=[ 3326] 00:16:27.511 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:27.511 slat (usec): min=26, max=101, avg=42.61, stdev= 7.93 00:16:27.511 clat (usec): min=144, max=942, avg=296.41, stdev=56.47 00:16:27.511 lat (usec): min=177, max=990, avg=339.02, stdev=56.55 00:16:27.511 clat percentiles (usec): 00:16:27.511 | 1.00th=[ 180], 5.00th=[ 221], 10.00th=[ 235], 20.00th=[ 253], 00:16:27.511 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 302], 00:16:27.511 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 379], 95.00th=[ 396], 00:16:27.511 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 478], 99.95th=[ 938], 00:16:27.511 | 99.99th=[ 938] 00:16:27.511 bw 
( KiB/s): min= 6840, max= 6840, per=20.89%, avg=6840.00, stdev= 0.00, samples=1 00:16:27.511 iops : min= 1710, max= 1710, avg=1710.00, stdev= 0.00, samples=1 00:16:27.511 lat (usec) : 250=10.15%, 500=89.19%, 750=0.59%, 1000=0.04% 00:16:27.511 lat (msec) : 4=0.04% 00:16:27.511 cpu : usr=1.80%, sys=6.80%, ctx=2729, majf=0, minf=12 00:16:27.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.511 issued rwts: total=1193,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.511 job3: (groupid=0, jobs=1): err= 0: pid=76608: Sun Nov 17 22:16:23 2024 00:16:27.511 read: IOPS=1188, BW=4755KiB/s (4869kB/s)(4760KiB/1001msec) 00:16:27.511 slat (usec): min=17, max=237, avg=27.99, stdev=10.67 00:16:27.511 clat (usec): min=203, max=2477, avg=363.34, stdev=71.38 00:16:27.511 lat (usec): min=230, max=2502, avg=391.33, stdev=72.22 00:16:27.511 clat percentiles (usec): 00:16:27.511 | 1.00th=[ 277], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 338], 00:16:27.511 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 363], 00:16:27.511 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 400], 95.00th=[ 420], 00:16:27.511 | 99.00th=[ 469], 99.50th=[ 529], 99.90th=[ 906], 99.95th=[ 2474], 00:16:27.511 | 99.99th=[ 2474] 00:16:27.511 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:27.511 slat (usec): min=27, max=270, avg=42.93, stdev=14.47 00:16:27.511 clat (usec): min=80, max=3803, avg=299.31, stdev=104.07 00:16:27.511 lat (usec): min=195, max=3844, avg=342.24, stdev=104.54 00:16:27.511 clat percentiles (usec): 00:16:27.511 | 1.00th=[ 174], 5.00th=[ 219], 10.00th=[ 235], 20.00th=[ 258], 00:16:27.511 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 302], 00:16:27.511 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 375], 95.00th=[ 392], 00:16:27.511 | 99.00th=[ 437], 99.50th=[ 465], 99.90th=[ 562], 99.95th=[ 3818], 00:16:27.511 | 99.99th=[ 3818] 00:16:27.511 bw ( KiB/s): min= 6704, max= 6704, per=20.48%, avg=6704.00, stdev= 0.00, samples=1 00:16:27.511 iops : min= 1676, max= 1676, avg=1676.00, stdev= 0.00, samples=1 00:16:27.511 lat (usec) : 100=0.04%, 250=9.02%, 500=90.57%, 750=0.26%, 1000=0.04% 00:16:27.511 lat (msec) : 4=0.07% 00:16:27.511 cpu : usr=1.70%, sys=7.60%, ctx=2744, majf=0, minf=5 00:16:27.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:27.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.511 issued rwts: total=1190,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:27.511 00:16:27.511 Run status group 0 (all jobs): 00:16:27.511 READ: bw=29.0MiB/s (30.4MB/s), 4755KiB/s-9.85MiB/s (4869kB/s-10.3MB/s), io=29.0MiB (30.4MB), run=1001-1001msec 00:16:27.511 WRITE: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:16:27.511 00:16:27.511 Disk stats (read/write): 00:16:27.511 nvme0n1: ios=2098/2212, merge=0/0, ticks=423/390, in_queue=813, util=86.77% 00:16:27.511 nvme0n2: ios=2072/2224, merge=0/0, ticks=420/389, in_queue=809, util=87.23% 00:16:27.511 nvme0n3: ios=1024/1276, merge=0/0, ticks=385/391, in_queue=776, util=88.56% 00:16:27.511 
nvme0n4: ios=1024/1270, merge=0/0, ticks=381/390, in_queue=771, util=89.43% 00:16:27.511 22:16:24 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:27.511 [global] 00:16:27.511 thread=1 00:16:27.511 invalidate=1 00:16:27.511 rw=randwrite 00:16:27.511 time_based=1 00:16:27.511 runtime=1 00:16:27.511 ioengine=libaio 00:16:27.511 direct=1 00:16:27.511 bs=4096 00:16:27.511 iodepth=1 00:16:27.511 norandommap=0 00:16:27.511 numjobs=1 00:16:27.511 00:16:27.511 verify_dump=1 00:16:27.511 verify_backlog=512 00:16:27.511 verify_state_save=0 00:16:27.511 do_verify=1 00:16:27.511 verify=crc32c-intel 00:16:27.511 [job0] 00:16:27.511 filename=/dev/nvme0n1 00:16:27.511 [job1] 00:16:27.511 filename=/dev/nvme0n2 00:16:27.511 [job2] 00:16:27.511 filename=/dev/nvme0n3 00:16:27.511 [job3] 00:16:27.511 filename=/dev/nvme0n4 00:16:27.511 Could not set queue depth (nvme0n1) 00:16:27.511 Could not set queue depth (nvme0n2) 00:16:27.511 Could not set queue depth (nvme0n3) 00:16:27.511 Could not set queue depth (nvme0n4) 00:16:27.770 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.770 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.770 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.770 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.770 fio-3.35 00:16:27.770 Starting 4 threads 00:16:29.192 00:16:29.192 job0: (groupid=0, jobs=1): err= 0: pid=76665: Sun Nov 17 22:16:25 2024 00:16:29.192 read: IOPS=1571, BW=6286KiB/s (6437kB/s)(6292KiB/1001msec) 00:16:29.192 slat (nsec): min=12693, max=42721, avg=15881.03, stdev=3988.62 00:16:29.192 clat (usec): min=165, max=633, avg=293.23, stdev=29.13 00:16:29.192 lat (usec): min=180, max=655, avg=309.11, stdev=29.76 00:16:29.192 clat percentiles (usec): 00:16:29.192 | 1.00th=[ 253], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 273], 00:16:29.192 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:16:29.192 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 334], 00:16:29.192 | 99.00th=[ 416], 99.50th=[ 433], 99.90th=[ 490], 99.95th=[ 635], 00:16:29.192 | 99.99th=[ 635] 00:16:29.192 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:29.192 slat (nsec): min=19166, max=72346, avg=24982.78, stdev=5363.74 00:16:29.192 clat (usec): min=133, max=1089, avg=222.76, stdev=28.34 00:16:29.192 lat (usec): min=159, max=1111, avg=247.74, stdev=28.36 00:16:29.192 clat percentiles (usec): 00:16:29.192 | 1.00th=[ 186], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:16:29.192 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:16:29.192 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 258], 00:16:29.192 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 363], 99.95th=[ 486], 00:16:29.192 | 99.99th=[ 1090] 00:16:29.192 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:16:29.192 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:29.192 lat (usec) : 250=52.00%, 500=47.94%, 750=0.03% 00:16:29.192 lat (msec) : 2=0.03% 00:16:29.192 cpu : usr=1.30%, sys=5.50%, ctx=3621, majf=0, minf=9 00:16:29.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:29.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.192 issued rwts: total=1573,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.192 job1: (groupid=0, jobs=1): err= 0: pid=76666: Sun Nov 17 22:16:25 2024 00:16:29.192 read: IOPS=1601, BW=6406KiB/s (6559kB/s)(6412KiB/1001msec) 00:16:29.192 slat (nsec): min=13495, max=53446, avg=19600.54, stdev=4697.09 00:16:29.192 clat (usec): min=153, max=500, avg=282.99, stdev=23.61 00:16:29.192 lat (usec): min=167, max=516, avg=302.59, stdev=24.03 00:16:29.192 clat percentiles (usec): 00:16:29.192 | 1.00th=[ 237], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 265], 00:16:29.192 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:16:29.192 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 322], 00:16:29.192 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 396], 99.95th=[ 502], 00:16:29.192 | 99.99th=[ 502] 00:16:29.192 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:29.192 slat (nsec): min=18656, max=91823, avg=26862.29, stdev=6801.61 00:16:29.192 clat (usec): min=112, max=3651, avg=221.10, stdev=79.12 00:16:29.192 lat (usec): min=137, max=3676, avg=247.96, stdev=79.17 00:16:29.192 clat percentiles (usec): 00:16:29.192 | 1.00th=[ 180], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 204], 00:16:29.192 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:16:29.192 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 253], 00:16:29.192 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 375], 99.95th=[ 709], 00:16:29.192 | 99.99th=[ 3654] 00:16:29.192 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:16:29.192 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:29.192 lat (usec) : 250=54.18%, 500=45.74%, 750=0.05% 00:16:29.192 lat (msec) : 4=0.03% 00:16:29.192 cpu : usr=2.10%, sys=5.80%, ctx=3661, majf=0, minf=7 00:16:29.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.192 issued rwts: total=1603,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.192 job2: (groupid=0, jobs=1): err= 0: pid=76667: Sun Nov 17 22:16:25 2024 00:16:29.192 read: IOPS=1597, BW=6390KiB/s (6543kB/s)(6396KiB/1001msec) 00:16:29.192 slat (nsec): min=12758, max=47141, avg=16885.78, stdev=4090.12 00:16:29.192 clat (usec): min=239, max=1547, avg=288.68, stdev=40.56 00:16:29.192 lat (usec): min=259, max=1562, avg=305.56, stdev=40.80 00:16:29.192 clat percentiles (usec): 00:16:29.192 | 1.00th=[ 251], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 269], 00:16:29.192 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:16:29.192 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 326], 00:16:29.192 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 775], 99.95th=[ 1549], 00:16:29.192 | 99.99th=[ 1549] 00:16:29.192 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:29.192 slat (usec): min=18, max=105, avg=27.43, stdev= 7.83 00:16:29.192 clat (usec): min=114, max=1881, avg=218.93, stdev=41.21 00:16:29.192 lat (usec): min=134, max=1910, avg=246.36, stdev=41.30 00:16:29.192 clat percentiles (usec): 00:16:29.192 | 1.00th=[ 176], 5.00th=[ 194], 
10.00th=[ 198], 20.00th=[ 204], 00:16:29.192 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:16:29.192 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 249], 00:16:29.192 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 293], 99.95th=[ 314], 00:16:29.192 | 99.99th=[ 1876] 00:16:29.192 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:16:29.192 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:29.192 lat (usec) : 250=53.74%, 500=46.17%, 1000=0.03% 00:16:29.192 lat (msec) : 2=0.05% 00:16:29.192 cpu : usr=1.30%, sys=6.40%, ctx=3649, majf=0, minf=19 00:16:29.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.192 issued rwts: total=1599,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.192 job3: (groupid=0, jobs=1): err= 0: pid=76668: Sun Nov 17 22:16:25 2024 00:16:29.192 read: IOPS=1606, BW=6426KiB/s (6580kB/s)(6432KiB/1001msec) 00:16:29.192 slat (nsec): min=11840, max=46920, avg=17468.79, stdev=4132.42 00:16:29.192 clat (usec): min=175, max=687, avg=287.46, stdev=26.57 00:16:29.192 lat (usec): min=187, max=703, avg=304.93, stdev=26.70 00:16:29.192 clat percentiles (usec): 00:16:29.192 | 1.00th=[ 239], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 269], 00:16:29.192 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:16:29.192 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 326], 00:16:29.192 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 603], 99.95th=[ 685], 00:16:29.192 | 99.99th=[ 685] 00:16:29.192 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:29.192 slat (nsec): min=17506, max=92485, avg=25628.81, stdev=6471.32 00:16:29.192 clat (usec): min=120, max=525, avg=219.73, stdev=21.22 00:16:29.192 lat (usec): min=139, max=553, avg=245.36, stdev=21.57 00:16:29.192 clat percentiles (usec): 00:16:29.192 | 1.00th=[ 153], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:16:29.192 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:16:29.192 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 253], 00:16:29.192 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 293], 99.95th=[ 297], 00:16:29.192 | 99.99th=[ 529] 00:16:29.192 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:16:29.192 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:29.192 lat (usec) : 250=53.61%, 500=46.31%, 750=0.08% 00:16:29.192 cpu : usr=1.90%, sys=5.80%, ctx=3656, majf=0, minf=11 00:16:29.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.192 issued rwts: total=1608,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.192 00:16:29.192 Run status group 0 (all jobs): 00:16:29.192 READ: bw=24.9MiB/s (26.1MB/s), 6286KiB/s-6426KiB/s (6437kB/s-6580kB/s), io=24.9MiB (26.1MB), run=1001-1001msec 00:16:29.192 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:16:29.192 00:16:29.192 Disk stats (read/write): 00:16:29.192 
nvme0n1: ios=1576/1536, merge=0/0, ticks=485/357, in_queue=842, util=87.56% 00:16:29.192 nvme0n2: ios=1562/1547, merge=0/0, ticks=450/353, in_queue=803, util=87.98% 00:16:29.192 nvme0n3: ios=1536/1551, merge=0/0, ticks=455/364, in_queue=819, util=89.10% 00:16:29.192 nvme0n4: ios=1536/1553, merge=0/0, ticks=453/353, in_queue=806, util=89.66% 00:16:29.192 22:16:25 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:29.192 [global] 00:16:29.192 thread=1 00:16:29.192 invalidate=1 00:16:29.192 rw=write 00:16:29.192 time_based=1 00:16:29.193 runtime=1 00:16:29.193 ioengine=libaio 00:16:29.193 direct=1 00:16:29.193 bs=4096 00:16:29.193 iodepth=128 00:16:29.193 norandommap=0 00:16:29.193 numjobs=1 00:16:29.193 00:16:29.193 verify_dump=1 00:16:29.193 verify_backlog=512 00:16:29.193 verify_state_save=0 00:16:29.193 do_verify=1 00:16:29.193 verify=crc32c-intel 00:16:29.193 [job0] 00:16:29.193 filename=/dev/nvme0n1 00:16:29.193 [job1] 00:16:29.193 filename=/dev/nvme0n2 00:16:29.193 [job2] 00:16:29.193 filename=/dev/nvme0n3 00:16:29.193 [job3] 00:16:29.193 filename=/dev/nvme0n4 00:16:29.193 Could not set queue depth (nvme0n1) 00:16:29.193 Could not set queue depth (nvme0n2) 00:16:29.193 Could not set queue depth (nvme0n3) 00:16:29.193 Could not set queue depth (nvme0n4) 00:16:29.193 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.193 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.193 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.193 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.193 fio-3.35 00:16:29.193 Starting 4 threads 00:16:30.585 00:16:30.585 job0: (groupid=0, jobs=1): err= 0: pid=76722: Sun Nov 17 22:16:26 2024 00:16:30.585 read: IOPS=3912, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1004msec) 00:16:30.585 slat (usec): min=2, max=5837, avg=121.61, stdev=556.40 00:16:30.585 clat (usec): min=703, max=26764, avg=15798.23, stdev=2941.44 00:16:30.585 lat (usec): min=3200, max=26788, avg=15919.84, stdev=2916.89 00:16:30.585 clat percentiles (usec): 00:16:30.585 | 1.00th=[ 6849], 5.00th=[12256], 10.00th=[13566], 20.00th=[14353], 00:16:30.585 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15533], 00:16:30.585 | 70.00th=[16057], 80.00th=[17171], 90.00th=[20317], 95.00th=[21103], 00:16:30.585 | 99.00th=[26084], 99.50th=[26608], 99.90th=[26870], 99.95th=[26870], 00:16:30.585 | 99.99th=[26870] 00:16:30.585 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:16:30.585 slat (usec): min=4, max=5634, avg=119.87, stdev=551.74 00:16:30.585 clat (usec): min=11341, max=24593, avg=15765.90, stdev=2389.16 00:16:30.585 lat (usec): min=11367, max=24625, avg=15885.77, stdev=2386.08 00:16:30.585 clat percentiles (usec): 00:16:30.585 | 1.00th=[11731], 5.00th=[12649], 10.00th=[12911], 20.00th=[13435], 00:16:30.585 | 30.00th=[13960], 40.00th=[14877], 50.00th=[15664], 60.00th=[16319], 00:16:30.585 | 70.00th=[16909], 80.00th=[17433], 90.00th=[19530], 95.00th=[20055], 00:16:30.585 | 99.00th=[21890], 99.50th=[22414], 99.90th=[24511], 99.95th=[24511], 00:16:30.585 | 99.99th=[24511] 00:16:30.585 bw ( KiB/s): min=16884, max=16884, per=33.04%, avg=16884.00, stdev= 0.00, samples=1 00:16:30.585 iops : min= 4221, max= 4221, avg=4221.00, stdev= 0.00, samples=1 00:16:30.585 
lat (usec) : 750=0.01% 00:16:30.585 lat (msec) : 4=0.24%, 10=0.40%, 20=91.95%, 50=7.40% 00:16:30.585 cpu : usr=4.59%, sys=10.27%, ctx=660, majf=0, minf=13 00:16:30.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:30.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:30.585 issued rwts: total=3928,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:30.585 job1: (groupid=0, jobs=1): err= 0: pid=76723: Sun Nov 17 22:16:26 2024 00:16:30.585 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:30.585 slat (usec): min=6, max=14614, avg=157.53, stdev=991.12 00:16:30.585 clat (usec): min=9512, max=45462, avg=19928.06, stdev=6644.56 00:16:30.585 lat (usec): min=9546, max=45482, avg=20085.59, stdev=6741.17 00:16:30.585 clat percentiles (usec): 00:16:30.585 | 1.00th=[10159], 5.00th=[11207], 10.00th=[11600], 20.00th=[12911], 00:16:30.585 | 30.00th=[13960], 40.00th=[19006], 50.00th=[20579], 60.00th=[21890], 00:16:30.585 | 70.00th=[22938], 80.00th=[25035], 90.00th=[29754], 95.00th=[30278], 00:16:30.585 | 99.00th=[38536], 99.50th=[41157], 99.90th=[45351], 99.95th=[45351], 00:16:30.585 | 99.99th=[45351] 00:16:30.585 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(13.4MiB/1001msec); 0 zone resets 00:16:30.585 slat (usec): min=7, max=10723, avg=141.80, stdev=829.11 00:16:30.585 clat (usec): min=618, max=46792, avg=18876.99, stdev=7272.36 00:16:30.585 lat (usec): min=678, max=46823, avg=19018.79, stdev=7345.38 00:16:30.585 clat percentiles (usec): 00:16:30.585 | 1.00th=[ 6456], 5.00th=[10552], 10.00th=[11076], 20.00th=[12649], 00:16:30.585 | 30.00th=[13698], 40.00th=[14746], 50.00th=[18482], 60.00th=[19268], 00:16:30.585 | 70.00th=[20317], 80.00th=[23725], 90.00th=[31589], 95.00th=[32375], 00:16:30.585 | 99.00th=[39060], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:16:30.585 | 99.99th=[46924] 00:16:30.585 bw ( KiB/s): min=11169, max=11169, per=21.86%, avg=11169.00, stdev= 0.00, samples=1 00:16:30.585 iops : min= 2792, max= 2792, avg=2792.00, stdev= 0.00, samples=1 00:16:30.585 lat (usec) : 750=0.03% 00:16:30.585 lat (msec) : 10=2.26%, 20=55.53%, 50=42.18% 00:16:30.585 cpu : usr=3.40%, sys=9.50%, ctx=233, majf=0, minf=9 00:16:30.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:16:30.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:30.585 issued rwts: total=3072,3440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:30.585 job2: (groupid=0, jobs=1): err= 0: pid=76726: Sun Nov 17 22:16:26 2024 00:16:30.585 read: IOPS=3291, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1003msec) 00:16:30.585 slat (usec): min=2, max=4772, avg=140.97, stdev=626.29 00:16:30.585 clat (usec): min=489, max=25809, avg=18177.57, stdev=2468.12 00:16:30.585 lat (usec): min=2891, max=25845, avg=18318.53, stdev=2403.97 00:16:30.585 clat percentiles (usec): 00:16:30.585 | 1.00th=[ 6718], 5.00th=[14484], 10.00th=[15401], 20.00th=[17433], 00:16:30.585 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18744], 00:16:30.585 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20317], 95.00th=[21103], 00:16:30.585 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:16:30.586 | 99.99th=[25822] 
00:16:30.586 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:16:30.586 slat (usec): min=4, max=9141, avg=141.24, stdev=624.68 00:16:30.586 clat (usec): min=13285, max=26578, avg=18503.99, stdev=2201.88 00:16:30.586 lat (usec): min=13302, max=26594, avg=18645.24, stdev=2177.26 00:16:30.586 clat percentiles (usec): 00:16:30.586 | 1.00th=[14222], 5.00th=[15008], 10.00th=[15533], 20.00th=[16319], 00:16:30.586 | 30.00th=[17433], 40.00th=[18220], 50.00th=[18744], 60.00th=[19006], 00:16:30.586 | 70.00th=[19530], 80.00th=[20055], 90.00th=[20841], 95.00th=[21890], 00:16:30.586 | 99.00th=[25822], 99.50th=[26346], 99.90th=[26608], 99.95th=[26608], 00:16:30.586 | 99.99th=[26608] 00:16:30.586 bw ( KiB/s): min=14307, max=14307, per=28.00%, avg=14307.00, stdev= 0.00, samples=1 00:16:30.586 iops : min= 3576, max= 3576, avg=3576.00, stdev= 0.00, samples=1 00:16:30.586 lat (usec) : 500=0.01% 00:16:30.586 lat (msec) : 4=0.07%, 10=0.65%, 20=82.45%, 50=16.80% 00:16:30.586 cpu : usr=3.09%, sys=10.78%, ctx=591, majf=0, minf=13 00:16:30.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:30.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:30.586 issued rwts: total=3301,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:30.586 job3: (groupid=0, jobs=1): err= 0: pid=76727: Sun Nov 17 22:16:26 2024 00:16:30.586 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:30.586 slat (usec): min=6, max=9955, avg=332.48, stdev=1430.50 00:16:30.586 clat (usec): min=22415, max=75739, avg=42764.48, stdev=13767.26 00:16:30.586 lat (usec): min=24082, max=75756, avg=43096.96, stdev=13787.73 00:16:30.586 clat percentiles (usec): 00:16:30.586 | 1.00th=[25035], 5.00th=[30278], 10.00th=[30802], 20.00th=[31589], 00:16:30.586 | 30.00th=[32113], 40.00th=[34866], 50.00th=[36439], 60.00th=[40633], 00:16:30.586 | 70.00th=[46400], 80.00th=[57410], 90.00th=[66847], 95.00th=[69731], 00:16:30.586 | 99.00th=[74974], 99.50th=[74974], 99.90th=[76022], 99.95th=[76022], 00:16:30.586 | 99.99th=[76022] 00:16:30.586 write: IOPS=1703, BW=6813KiB/s (6977kB/s)(6820KiB/1001msec); 0 zone resets 00:16:30.586 slat (usec): min=14, max=11739, avg=277.18, stdev=1404.63 00:16:30.586 clat (usec): min=669, max=53972, avg=34707.24, stdev=8791.10 00:16:30.586 lat (usec): min=6304, max=54020, avg=34984.42, stdev=8737.08 00:16:30.586 clat percentiles (usec): 00:16:30.586 | 1.00th=[ 6783], 5.00th=[20055], 10.00th=[26346], 20.00th=[27919], 00:16:30.586 | 30.00th=[29754], 40.00th=[31327], 50.00th=[33817], 60.00th=[38536], 00:16:30.586 | 70.00th=[40633], 80.00th=[42730], 90.00th=[44303], 95.00th=[46400], 00:16:30.586 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:16:30.586 | 99.99th=[53740] 00:16:30.586 bw ( KiB/s): min= 8175, max= 8175, per=16.00%, avg=8175.00, stdev= 0.00, samples=1 00:16:30.586 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:16:30.586 lat (usec) : 750=0.03% 00:16:30.586 lat (msec) : 10=0.99%, 20=1.67%, 50=84.39%, 100=12.93% 00:16:30.586 cpu : usr=2.10%, sys=5.40%, ctx=141, majf=0, minf=17 00:16:30.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:16:30.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:30.586 
issued rwts: total=1536,1705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:30.586 00:16:30.586 Run status group 0 (all jobs): 00:16:30.586 READ: bw=46.1MiB/s (48.3MB/s), 6138KiB/s-15.3MiB/s (6285kB/s-16.0MB/s), io=46.2MiB (48.5MB), run=1001-1004msec 00:16:30.586 WRITE: bw=49.9MiB/s (52.3MB/s), 6813KiB/s-15.9MiB/s (6977kB/s-16.7MB/s), io=50.1MiB (52.5MB), run=1001-1004msec 00:16:30.586 00:16:30.586 Disk stats (read/write): 00:16:30.586 nvme0n1: ios=3526/3584, merge=0/0, ticks=12884/12100, in_queue=24984, util=88.87% 00:16:30.586 nvme0n2: ios=2524/2560, merge=0/0, ticks=25894/24442, in_queue=50336, util=89.16% 00:16:30.586 nvme0n3: ios=2858/3072, merge=0/0, ticks=12370/12610, in_queue=24980, util=88.85% 00:16:30.586 nvme0n4: ios=1184/1536, merge=0/0, ticks=13308/12652, in_queue=25960, util=89.71% 00:16:30.586 22:16:26 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:30.586 [global] 00:16:30.586 thread=1 00:16:30.586 invalidate=1 00:16:30.586 rw=randwrite 00:16:30.586 time_based=1 00:16:30.586 runtime=1 00:16:30.586 ioengine=libaio 00:16:30.586 direct=1 00:16:30.586 bs=4096 00:16:30.586 iodepth=128 00:16:30.586 norandommap=0 00:16:30.586 numjobs=1 00:16:30.586 00:16:30.586 verify_dump=1 00:16:30.586 verify_backlog=512 00:16:30.586 verify_state_save=0 00:16:30.586 do_verify=1 00:16:30.586 verify=crc32c-intel 00:16:30.586 [job0] 00:16:30.586 filename=/dev/nvme0n1 00:16:30.586 [job1] 00:16:30.586 filename=/dev/nvme0n2 00:16:30.586 [job2] 00:16:30.586 filename=/dev/nvme0n3 00:16:30.586 [job3] 00:16:30.586 filename=/dev/nvme0n4 00:16:30.586 Could not set queue depth (nvme0n1) 00:16:30.586 Could not set queue depth (nvme0n2) 00:16:30.586 Could not set queue depth (nvme0n3) 00:16:30.586 Could not set queue depth (nvme0n4) 00:16:30.586 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:30.586 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:30.586 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:30.586 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:30.586 fio-3.35 00:16:30.586 Starting 4 threads 00:16:31.964 00:16:31.964 job0: (groupid=0, jobs=1): err= 0: pid=76786: Sun Nov 17 22:16:28 2024 00:16:31.964 read: IOPS=3557, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1009msec) 00:16:31.964 slat (usec): min=5, max=14780, avg=134.42, stdev=980.11 00:16:31.964 clat (usec): min=5296, max=32530, avg=17339.06, stdev=3969.36 00:16:31.964 lat (usec): min=5311, max=32546, avg=17473.48, stdev=4037.10 00:16:31.964 clat percentiles (usec): 00:16:31.964 | 1.00th=[ 9372], 5.00th=[12911], 10.00th=[13698], 20.00th=[14615], 00:16:31.964 | 30.00th=[15270], 40.00th=[15926], 50.00th=[16450], 60.00th=[17433], 00:16:31.964 | 70.00th=[18482], 80.00th=[20055], 90.00th=[22414], 95.00th=[25297], 00:16:31.964 | 99.00th=[30802], 99.50th=[31327], 99.90th=[32375], 99.95th=[32637], 00:16:31.964 | 99.99th=[32637] 00:16:31.964 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:16:31.964 slat (usec): min=5, max=14732, avg=118.44, stdev=828.58 00:16:31.964 clat (usec): min=3343, max=32496, avg=16024.54, stdev=3350.66 00:16:31.964 lat (usec): min=3368, max=32508, avg=16142.98, stdev=3457.48 00:16:31.964 clat percentiles 
(usec): 00:16:31.964 | 1.00th=[ 5014], 5.00th=[ 7635], 10.00th=[11994], 20.00th=[15533], 00:16:31.964 | 30.00th=[15926], 40.00th=[16450], 50.00th=[16909], 60.00th=[17171], 00:16:31.964 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18220], 95.00th=[19006], 00:16:31.964 | 99.00th=[23987], 99.50th=[26346], 99.90th=[31589], 99.95th=[32375], 00:16:31.964 | 99.99th=[32375] 00:16:31.964 bw ( KiB/s): min=15416, max=16384, per=28.17%, avg=15900.00, stdev=684.48, samples=2 00:16:31.964 iops : min= 3854, max= 4096, avg=3975.00, stdev=171.12, samples=2 00:16:31.964 lat (msec) : 4=0.23%, 10=4.55%, 20=85.08%, 50=10.14% 00:16:31.964 cpu : usr=4.66%, sys=9.42%, ctx=381, majf=0, minf=2 00:16:31.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:31.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:31.964 issued rwts: total=3590,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.964 job1: (groupid=0, jobs=1): err= 0: pid=76787: Sun Nov 17 22:16:28 2024 00:16:31.964 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:16:31.964 slat (usec): min=5, max=8875, avg=130.51, stdev=809.75 00:16:31.964 clat (usec): min=9652, max=27708, avg=16440.38, stdev=2022.46 00:16:31.964 lat (usec): min=9667, max=27738, avg=16570.88, stdev=2121.32 00:16:31.964 clat percentiles (usec): 00:16:31.964 | 1.00th=[10814], 5.00th=[12649], 10.00th=[14484], 20.00th=[15401], 00:16:31.964 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16319], 60.00th=[16909], 00:16:31.964 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18482], 95.00th=[19792], 00:16:31.964 | 99.00th=[22676], 99.50th=[23462], 99.90th=[26084], 99.95th=[27132], 00:16:31.964 | 99.99th=[27657] 00:16:31.964 write: IOPS=3988, BW=15.6MiB/s (16.3MB/s)(15.6MiB/1003msec); 0 zone resets 00:16:31.964 slat (usec): min=11, max=8021, avg=124.88, stdev=658.69 00:16:31.964 clat (usec): min=2599, max=26088, avg=16947.02, stdev=2492.96 00:16:31.964 lat (usec): min=2624, max=26138, avg=17071.90, stdev=2516.63 00:16:31.964 clat percentiles (usec): 00:16:31.964 | 1.00th=[ 9241], 5.00th=[11076], 10.00th=[15139], 20.00th=[15926], 00:16:31.964 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17433], 60.00th=[17695], 00:16:31.964 | 70.00th=[17957], 80.00th=[18482], 90.00th=[18744], 95.00th=[19006], 00:16:31.964 | 99.00th=[23462], 99.50th=[24511], 99.90th=[25297], 99.95th=[25822], 00:16:31.964 | 99.99th=[26084] 00:16:31.964 bw ( KiB/s): min=14600, max=16384, per=27.44%, avg=15492.00, stdev=1261.48, samples=2 00:16:31.964 iops : min= 3650, max= 4096, avg=3873.00, stdev=315.37, samples=2 00:16:31.964 lat (msec) : 4=0.29%, 10=1.13%, 20=95.04%, 50=3.53% 00:16:31.964 cpu : usr=3.39%, sys=11.58%, ctx=340, majf=0, minf=3 00:16:31.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:31.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:31.964 issued rwts: total=3584,4000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.964 job2: (groupid=0, jobs=1): err= 0: pid=76788: Sun Nov 17 22:16:28 2024 00:16:31.964 read: IOPS=2823, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1003msec) 00:16:31.964 slat (usec): min=6, max=5306, avg=160.37, stdev=771.97 00:16:31.964 clat (usec): min=2668, max=25370, 
avg=20813.42, stdev=2463.69 00:16:31.964 lat (usec): min=2682, max=26163, avg=20973.79, stdev=2365.18 00:16:31.964 clat percentiles (usec): 00:16:31.964 | 1.00th=[ 8291], 5.00th=[17171], 10.00th=[19006], 20.00th=[20579], 00:16:31.964 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:16:31.964 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22414], 95.00th=[22676], 00:16:31.964 | 99.00th=[24511], 99.50th=[24511], 99.90th=[25297], 99.95th=[25297], 00:16:31.964 | 99.99th=[25297] 00:16:31.964 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:16:31.964 slat (usec): min=14, max=5892, avg=168.26, stdev=742.48 00:16:31.964 clat (usec): min=16253, max=27557, avg=21917.46, stdev=2446.98 00:16:31.964 lat (usec): min=16276, max=27578, avg=22085.72, stdev=2426.53 00:16:31.964 clat percentiles (usec): 00:16:31.964 | 1.00th=[16909], 5.00th=[17957], 10.00th=[18482], 20.00th=[19268], 00:16:31.964 | 30.00th=[20055], 40.00th=[21627], 50.00th=[22676], 60.00th=[23462], 00:16:31.964 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25297], 00:16:31.964 | 99.00th=[26346], 99.50th=[26346], 99.90th=[27657], 99.95th=[27657], 00:16:31.964 | 99.99th=[27657] 00:16:31.964 bw ( KiB/s): min=12288, max=12288, per=21.77%, avg=12288.00, stdev= 0.00, samples=2 00:16:31.964 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:16:31.964 lat (msec) : 4=0.27%, 10=0.54%, 20=22.51%, 50=76.68% 00:16:31.964 cpu : usr=2.69%, sys=10.38%, ctx=409, majf=0, minf=5 00:16:31.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:16:31.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:31.964 issued rwts: total=2832,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.964 job3: (groupid=0, jobs=1): err= 0: pid=76789: Sun Nov 17 22:16:28 2024 00:16:31.964 read: IOPS=2817, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1002msec) 00:16:31.964 slat (usec): min=10, max=6891, avg=162.93, stdev=869.10 00:16:31.964 clat (usec): min=532, max=27739, avg=20612.59, stdev=2616.35 00:16:31.964 lat (usec): min=6669, max=29041, avg=20775.51, stdev=2642.55 00:16:31.964 clat percentiles (usec): 00:16:31.964 | 1.00th=[ 7308], 5.00th=[16057], 10.00th=[18482], 20.00th=[19792], 00:16:31.964 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21103], 60.00th=[21365], 00:16:31.964 | 70.00th=[21627], 80.00th=[21890], 90.00th=[23200], 95.00th=[23462], 00:16:31.964 | 99.00th=[26084], 99.50th=[26870], 99.90th=[27395], 99.95th=[27657], 00:16:31.964 | 99.99th=[27657] 00:16:31.964 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:16:31.964 slat (usec): min=13, max=7538, avg=167.09, stdev=779.62 00:16:31.964 clat (usec): min=14631, max=28754, avg=22115.81, stdev=2657.04 00:16:31.964 lat (usec): min=14655, max=28790, avg=22282.90, stdev=2602.37 00:16:31.964 clat percentiles (usec): 00:16:31.964 | 1.00th=[15533], 5.00th=[16319], 10.00th=[16909], 20.00th=[20841], 00:16:31.964 | 30.00th=[21890], 40.00th=[22414], 50.00th=[22938], 60.00th=[23200], 00:16:31.964 | 70.00th=[23462], 80.00th=[23725], 90.00th=[24249], 95.00th=[25560], 00:16:31.964 | 99.00th=[27132], 99.50th=[27919], 99.90th=[28705], 99.95th=[28705], 00:16:31.964 | 99.99th=[28705] 00:16:31.964 bw ( KiB/s): min=12288, max=12288, per=21.77%, avg=12288.00, stdev= 0.00, samples=2 00:16:31.964 iops : min= 3072, max= 3072, avg=3072.00, 
stdev= 0.00, samples=2 00:16:31.964 lat (usec) : 750=0.02% 00:16:31.964 lat (msec) : 10=0.71%, 20=21.24%, 50=78.03% 00:16:31.964 cpu : usr=2.40%, sys=10.09%, ctx=358, majf=0, minf=5 00:16:31.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:16:31.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:31.964 issued rwts: total=2823,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.964 00:16:31.964 Run status group 0 (all jobs): 00:16:31.964 READ: bw=49.7MiB/s (52.1MB/s), 11.0MiB/s-14.0MiB/s (11.5MB/s-14.6MB/s), io=50.1MiB (52.5MB), run=1002-1009msec 00:16:31.964 WRITE: bw=55.1MiB/s (57.8MB/s), 12.0MiB/s-15.9MiB/s (12.5MB/s-16.6MB/s), io=55.6MiB (58.3MB), run=1002-1009msec 00:16:31.964 00:16:31.964 Disk stats (read/write): 00:16:31.964 nvme0n1: ios=3122/3407, merge=0/0, ticks=49925/52063, in_queue=101988, util=87.78% 00:16:31.964 nvme0n2: ios=3121/3322, merge=0/0, ticks=23858/25890, in_queue=49748, util=88.86% 00:16:31.964 nvme0n3: ios=2466/2560, merge=0/0, ticks=12544/12896, in_queue=25440, util=88.80% 00:16:31.964 nvme0n4: ios=2441/2560, merge=0/0, ticks=16094/17362, in_queue=33456, util=89.55% 00:16:31.964 22:16:28 -- target/fio.sh@55 -- # sync 00:16:31.964 22:16:28 -- target/fio.sh@59 -- # fio_pid=76803 00:16:31.964 22:16:28 -- target/fio.sh@61 -- # sleep 3 00:16:31.964 22:16:28 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:31.964 [global] 00:16:31.964 thread=1 00:16:31.964 invalidate=1 00:16:31.964 rw=read 00:16:31.964 time_based=1 00:16:31.964 runtime=10 00:16:31.964 ioengine=libaio 00:16:31.964 direct=1 00:16:31.964 bs=4096 00:16:31.964 iodepth=1 00:16:31.964 norandommap=1 00:16:31.964 numjobs=1 00:16:31.964 00:16:31.964 [job0] 00:16:31.964 filename=/dev/nvme0n1 00:16:31.964 [job1] 00:16:31.964 filename=/dev/nvme0n2 00:16:31.964 [job2] 00:16:31.964 filename=/dev/nvme0n3 00:16:31.964 [job3] 00:16:31.964 filename=/dev/nvme0n4 00:16:31.964 Could not set queue depth (nvme0n1) 00:16:31.964 Could not set queue depth (nvme0n2) 00:16:31.964 Could not set queue depth (nvme0n3) 00:16:31.964 Could not set queue depth (nvme0n4) 00:16:31.964 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:31.964 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:31.964 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:31.964 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:31.964 fio-3.35 00:16:31.964 Starting 4 threads 00:16:35.251 22:16:31 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:35.251 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=26058752, buflen=4096 00:16:35.251 fio: pid=76846, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:35.251 22:16:31 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:35.251 fio: pid=76845, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:35.251 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43687936, buflen=4096 00:16:35.251 22:16:31 -- target/fio.sh@65 
-- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:35.251 22:16:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:35.515 fio: pid=76843, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:35.515 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=32026624, buflen=4096 00:16:35.515 22:16:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:35.515 22:16:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:35.774 fio: pid=76844, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:35.774 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3510272, buflen=4096 00:16:35.774 00:16:35.774 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76843: Sun Nov 17 22:16:32 2024 00:16:35.774 read: IOPS=2292, BW=9169KiB/s (9389kB/s)(30.5MiB/3411msec) 00:16:35.774 slat (usec): min=13, max=9108, avg=23.87, stdev=190.90 00:16:35.774 clat (usec): min=104, max=45324, avg=410.08, stdev=531.05 00:16:35.774 lat (usec): min=138, max=45347, avg=433.96, stdev=563.61 00:16:35.774 clat percentiles (usec): 00:16:35.774 | 1.00th=[ 131], 5.00th=[ 141], 10.00th=[ 155], 20.00th=[ 258], 00:16:35.774 | 30.00th=[ 347], 40.00th=[ 404], 50.00th=[ 465], 60.00th=[ 490], 00:16:35.774 | 70.00th=[ 502], 80.00th=[ 519], 90.00th=[ 537], 95.00th=[ 553], 00:16:35.774 | 99.00th=[ 594], 99.50th=[ 627], 99.90th=[ 1614], 99.95th=[ 2573], 00:16:35.774 | 99.99th=[45351] 00:16:35.774 bw ( KiB/s): min= 7752, max= 9117, per=17.73%, avg=8123.50, stdev=555.14, samples=6 00:16:35.774 iops : min= 1938, max= 2279, avg=2030.83, stdev=138.69, samples=6 00:16:35.774 lat (usec) : 250=19.17%, 500=49.00%, 750=31.68%, 1000=0.04% 00:16:35.774 lat (msec) : 2=0.04%, 4=0.05%, 50=0.01% 00:16:35.774 cpu : usr=0.91%, sys=4.13%, ctx=7833, majf=0, minf=1 00:16:35.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.774 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.774 issued rwts: total=7820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.774 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76844: Sun Nov 17 22:16:32 2024 00:16:35.774 read: IOPS=4692, BW=18.3MiB/s (19.2MB/s)(67.3MiB/3674msec) 00:16:35.774 slat (usec): min=12, max=16931, avg=19.67, stdev=208.17 00:16:35.774 clat (usec): min=5, max=141893, avg=192.14, stdev=1125.88 00:16:35.774 lat (usec): min=137, max=141906, avg=211.82, stdev=1145.05 00:16:35.774 clat percentiles (usec): 00:16:35.774 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:16:35.774 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:16:35.774 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 215], 95.00th=[ 241], 00:16:35.774 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 766], 99.95th=[ 1696], 00:16:35.774 | 99.99th=[41157] 00:16:35.774 bw ( KiB/s): min=10769, max=21328, per=41.44%, avg=18990.43, stdev=3822.14, samples=7 00:16:35.774 iops : min= 2692, max= 5332, avg=4747.57, stdev=955.63, samples=7 00:16:35.774 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 250=95.85%, 500=4.00% 00:16:35.774 lat (usec) 
: 750=0.02%, 1000=0.02% 00:16:35.774 lat (msec) : 2=0.05%, 4=0.02%, 10=0.01%, 50=0.01%, 250=0.01% 00:16:35.774 cpu : usr=1.09%, sys=5.91%, ctx=17274, majf=0, minf=2 00:16:35.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.774 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.774 issued rwts: total=17242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.774 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76845: Sun Nov 17 22:16:32 2024 00:16:35.774 read: IOPS=3369, BW=13.2MiB/s (13.8MB/s)(41.7MiB/3166msec) 00:16:35.774 slat (usec): min=7, max=15758, avg=19.78, stdev=168.04 00:16:35.774 clat (usec): min=144, max=40871, avg=275.51, stdev=398.18 00:16:35.774 lat (usec): min=158, max=40892, avg=295.30, stdev=432.35 00:16:35.774 clat percentiles (usec): 00:16:35.774 | 1.00th=[ 202], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 235], 00:16:35.774 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265], 00:16:35.774 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 351], 95.00th=[ 379], 00:16:35.774 | 99.00th=[ 441], 99.50th=[ 465], 99.90th=[ 660], 99.95th=[ 865], 00:16:35.774 | 99.99th=[ 2769] 00:16:35.774 bw ( KiB/s): min=11281, max=14664, per=30.02%, avg=13756.17, stdev=1248.90, samples=6 00:16:35.774 iops : min= 2820, max= 3666, avg=3439.00, stdev=312.32, samples=6 00:16:35.774 lat (usec) : 250=41.75%, 500=58.04%, 750=0.13%, 1000=0.04% 00:16:35.774 lat (msec) : 2=0.01%, 4=0.02%, 50=0.01% 00:16:35.774 cpu : usr=1.04%, sys=4.58%, ctx=10673, majf=0, minf=2 00:16:35.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.774 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.774 issued rwts: total=10667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.774 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76846: Sun Nov 17 22:16:32 2024 00:16:35.774 read: IOPS=2170, BW=8682KiB/s (8891kB/s)(24.9MiB/2931msec) 00:16:35.774 slat (nsec): min=13985, max=85796, avg=19078.51, stdev=4922.15 00:16:35.774 clat (usec): min=160, max=67985, avg=439.18, stdev=858.11 00:16:35.774 lat (usec): min=178, max=68006, avg=458.25, stdev=858.10 00:16:35.774 clat percentiles (usec): 00:16:35.774 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 204], 20.00th=[ 237], 00:16:35.774 | 30.00th=[ 404], 40.00th=[ 469], 50.00th=[ 486], 60.00th=[ 498], 00:16:35.774 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 545], 95.00th=[ 562], 00:16:35.775 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[ 717], 99.95th=[ 1860], 00:16:35.775 | 99.99th=[67634] 00:16:35.775 bw ( KiB/s): min= 7744, max=13213, per=19.37%, avg=8876.20, stdev=2424.61, samples=5 00:16:35.775 iops : min= 1936, max= 3303, avg=2219.00, stdev=606.04, samples=5 00:16:35.775 lat (usec) : 250=21.52%, 500=39.67%, 750=38.72%, 1000=0.02% 00:16:35.775 lat (msec) : 2=0.02%, 4=0.03%, 100=0.02% 00:16:35.775 cpu : usr=0.99%, sys=3.75%, ctx=6363, majf=0, minf=2 00:16:35.775 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.775 complete : 
0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.775 issued rwts: total=6363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.775 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.775 00:16:35.775 Run status group 0 (all jobs): 00:16:35.775 READ: bw=44.7MiB/s (46.9MB/s), 8682KiB/s-18.3MiB/s (8891kB/s-19.2MB/s), io=164MiB (172MB), run=2931-3674msec 00:16:35.775 00:16:35.775 Disk stats (read/write): 00:16:35.775 nvme0n1: ios=7588/0, merge=0/0, ticks=3173/0, in_queue=3173, util=95.39% 00:16:35.775 nvme0n2: ios=16920/0, merge=0/0, ticks=3353/0, in_queue=3353, util=95.02% 00:16:35.775 nvme0n3: ios=10526/0, merge=0/0, ticks=2946/0, in_queue=2946, util=96.21% 00:16:35.775 nvme0n4: ios=6247/0, merge=0/0, ticks=2701/0, in_queue=2701, util=96.79% 00:16:35.775 22:16:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:35.775 22:16:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:36.034 22:16:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:36.034 22:16:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:36.293 22:16:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:36.293 22:16:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:36.552 22:16:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:36.552 22:16:33 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:36.812 22:16:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:36.812 22:16:33 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:37.380 22:16:33 -- target/fio.sh@69 -- # fio_status=0 00:16:37.380 22:16:33 -- target/fio.sh@70 -- # wait 76803 00:16:37.380 22:16:33 -- target/fio.sh@70 -- # fio_status=4 00:16:37.380 22:16:33 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:37.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.380 22:16:33 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:37.380 22:16:33 -- common/autotest_common.sh@1208 -- # local i=0 00:16:37.380 22:16:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:37.380 22:16:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.380 22:16:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:37.380 22:16:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.380 nvmf hotplug test: fio failed as expected 00:16:37.380 22:16:33 -- common/autotest_common.sh@1220 -- # return 0 00:16:37.380 22:16:33 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:37.380 22:16:33 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:37.380 22:16:33 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.640 22:16:34 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:37.640 22:16:34 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:37.640 22:16:34 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:37.640 22:16:34 -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:16:37.640 22:16:34 -- target/fio.sh@91 -- # nvmftestfini 00:16:37.640 22:16:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:37.640 22:16:34 -- nvmf/common.sh@116 -- # sync 00:16:37.640 22:16:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:37.640 22:16:34 -- nvmf/common.sh@119 -- # set +e 00:16:37.640 22:16:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:37.640 22:16:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:37.640 rmmod nvme_tcp 00:16:37.640 rmmod nvme_fabrics 00:16:37.640 rmmod nvme_keyring 00:16:37.640 22:16:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:37.640 22:16:34 -- nvmf/common.sh@123 -- # set -e 00:16:37.640 22:16:34 -- nvmf/common.sh@124 -- # return 0 00:16:37.640 22:16:34 -- nvmf/common.sh@477 -- # '[' -n 76316 ']' 00:16:37.640 22:16:34 -- nvmf/common.sh@478 -- # killprocess 76316 00:16:37.640 22:16:34 -- common/autotest_common.sh@936 -- # '[' -z 76316 ']' 00:16:37.640 22:16:34 -- common/autotest_common.sh@940 -- # kill -0 76316 00:16:37.640 22:16:34 -- common/autotest_common.sh@941 -- # uname 00:16:37.640 22:16:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.640 22:16:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76316 00:16:37.640 killing process with pid 76316 00:16:37.640 22:16:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:37.640 22:16:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:37.640 22:16:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76316' 00:16:37.640 22:16:34 -- common/autotest_common.sh@955 -- # kill 76316 00:16:37.640 22:16:34 -- common/autotest_common.sh@960 -- # wait 76316 00:16:38.208 22:16:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:38.208 22:16:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:38.208 22:16:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:38.208 22:16:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.208 22:16:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:38.208 22:16:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.208 22:16:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.208 22:16:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.208 22:16:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:38.208 00:16:38.208 real 0m19.535s 00:16:38.208 user 1m15.383s 00:16:38.208 sys 0m7.650s 00:16:38.208 22:16:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:38.208 22:16:34 -- common/autotest_common.sh@10 -- # set +x 00:16:38.208 ************************************ 00:16:38.208 END TEST nvmf_fio_target 00:16:38.208 ************************************ 00:16:38.208 22:16:34 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:38.208 22:16:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:38.208 22:16:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.208 22:16:34 -- common/autotest_common.sh@10 -- # set +x 00:16:38.208 ************************************ 00:16:38.208 START TEST nvmf_bdevio 00:16:38.208 ************************************ 00:16:38.208 22:16:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:38.208 * Looking for test storage... 
00:16:38.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:38.208 22:16:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:38.208 22:16:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:38.208 22:16:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:38.208 22:16:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:38.208 22:16:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:38.208 22:16:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:38.208 22:16:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:38.208 22:16:34 -- scripts/common.sh@335 -- # IFS=.-: 00:16:38.208 22:16:34 -- scripts/common.sh@335 -- # read -ra ver1 00:16:38.208 22:16:34 -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.208 22:16:34 -- scripts/common.sh@336 -- # read -ra ver2 00:16:38.208 22:16:34 -- scripts/common.sh@337 -- # local 'op=<' 00:16:38.208 22:16:34 -- scripts/common.sh@339 -- # ver1_l=2 00:16:38.208 22:16:34 -- scripts/common.sh@340 -- # ver2_l=1 00:16:38.208 22:16:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:38.208 22:16:34 -- scripts/common.sh@343 -- # case "$op" in 00:16:38.209 22:16:34 -- scripts/common.sh@344 -- # : 1 00:16:38.209 22:16:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:38.209 22:16:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:38.209 22:16:34 -- scripts/common.sh@364 -- # decimal 1 00:16:38.209 22:16:34 -- scripts/common.sh@352 -- # local d=1 00:16:38.209 22:16:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.209 22:16:34 -- scripts/common.sh@354 -- # echo 1 00:16:38.209 22:16:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:38.209 22:16:34 -- scripts/common.sh@365 -- # decimal 2 00:16:38.209 22:16:34 -- scripts/common.sh@352 -- # local d=2 00:16:38.209 22:16:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.209 22:16:34 -- scripts/common.sh@354 -- # echo 2 00:16:38.209 22:16:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:38.209 22:16:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:38.209 22:16:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:38.209 22:16:34 -- scripts/common.sh@367 -- # return 0 00:16:38.209 22:16:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.209 22:16:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:38.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.209 --rc genhtml_branch_coverage=1 00:16:38.209 --rc genhtml_function_coverage=1 00:16:38.209 --rc genhtml_legend=1 00:16:38.209 --rc geninfo_all_blocks=1 00:16:38.209 --rc geninfo_unexecuted_blocks=1 00:16:38.209 00:16:38.209 ' 00:16:38.209 22:16:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:38.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.209 --rc genhtml_branch_coverage=1 00:16:38.209 --rc genhtml_function_coverage=1 00:16:38.209 --rc genhtml_legend=1 00:16:38.209 --rc geninfo_all_blocks=1 00:16:38.209 --rc geninfo_unexecuted_blocks=1 00:16:38.209 00:16:38.209 ' 00:16:38.209 22:16:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:38.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.209 --rc genhtml_branch_coverage=1 00:16:38.209 --rc genhtml_function_coverage=1 00:16:38.209 --rc genhtml_legend=1 00:16:38.209 --rc geninfo_all_blocks=1 00:16:38.209 --rc geninfo_unexecuted_blocks=1 00:16:38.209 00:16:38.209 ' 00:16:38.209 
22:16:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:38.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.209 --rc genhtml_branch_coverage=1 00:16:38.209 --rc genhtml_function_coverage=1 00:16:38.209 --rc genhtml_legend=1 00:16:38.209 --rc geninfo_all_blocks=1 00:16:38.209 --rc geninfo_unexecuted_blocks=1 00:16:38.209 00:16:38.209 ' 00:16:38.209 22:16:34 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:38.209 22:16:34 -- nvmf/common.sh@7 -- # uname -s 00:16:38.468 22:16:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.468 22:16:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.468 22:16:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.468 22:16:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.468 22:16:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.468 22:16:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.468 22:16:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.468 22:16:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.468 22:16:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.468 22:16:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.468 22:16:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:16:38.468 22:16:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:16:38.468 22:16:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.468 22:16:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.468 22:16:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:38.468 22:16:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:38.468 22:16:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.468 22:16:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.468 22:16:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.468 22:16:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.469 22:16:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.469 22:16:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.469 22:16:34 -- paths/export.sh@5 -- # export PATH 00:16:38.469 22:16:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.469 22:16:34 -- nvmf/common.sh@46 -- # : 0 00:16:38.469 22:16:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:38.469 22:16:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:38.469 22:16:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:38.469 22:16:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.469 22:16:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.469 22:16:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:38.469 22:16:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:38.469 22:16:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:38.469 22:16:34 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.469 22:16:34 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.469 22:16:34 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:38.469 22:16:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:38.469 22:16:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.469 22:16:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:38.469 22:16:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:38.469 22:16:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:38.469 22:16:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.469 22:16:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.469 22:16:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.469 22:16:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:38.469 22:16:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:38.469 22:16:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:38.469 22:16:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:38.469 22:16:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:38.469 22:16:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:38.469 22:16:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.469 22:16:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.469 22:16:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:38.469 22:16:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:38.469 22:16:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:38.469 22:16:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:38.469 22:16:34 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:38.469 22:16:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.469 22:16:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:38.469 22:16:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:38.469 22:16:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:38.469 22:16:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:38.469 22:16:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:38.469 22:16:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:38.469 Cannot find device "nvmf_tgt_br" 00:16:38.469 22:16:34 -- nvmf/common.sh@154 -- # true 00:16:38.469 22:16:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.469 Cannot find device "nvmf_tgt_br2" 00:16:38.469 22:16:34 -- nvmf/common.sh@155 -- # true 00:16:38.469 22:16:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:38.469 22:16:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:38.469 Cannot find device "nvmf_tgt_br" 00:16:38.469 22:16:34 -- nvmf/common.sh@157 -- # true 00:16:38.469 22:16:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:38.469 Cannot find device "nvmf_tgt_br2" 00:16:38.469 22:16:34 -- nvmf/common.sh@158 -- # true 00:16:38.469 22:16:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:38.469 22:16:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:38.469 22:16:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.469 22:16:34 -- nvmf/common.sh@161 -- # true 00:16:38.469 22:16:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.469 22:16:34 -- nvmf/common.sh@162 -- # true 00:16:38.469 22:16:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:38.469 22:16:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:38.469 22:16:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:38.469 22:16:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:38.469 22:16:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:38.469 22:16:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:38.469 22:16:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:38.469 22:16:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:38.469 22:16:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:38.469 22:16:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:38.469 22:16:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:38.469 22:16:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:38.469 22:16:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:38.469 22:16:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:38.469 22:16:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:38.469 22:16:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:38.469 22:16:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:38.469 22:16:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:38.728 22:16:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:38.728 22:16:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:38.728 22:16:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:38.728 22:16:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:38.728 22:16:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:38.728 22:16:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:38.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:16:38.728 00:16:38.728 --- 10.0.0.2 ping statistics --- 00:16:38.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.728 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:38.728 22:16:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:38.728 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:38.728 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:38.728 00:16:38.728 --- 10.0.0.3 ping statistics --- 00:16:38.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.728 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:38.728 22:16:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:38.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:38.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:38.728 00:16:38.728 --- 10.0.0.1 ping statistics --- 00:16:38.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.728 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:38.728 22:16:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.728 22:16:35 -- nvmf/common.sh@421 -- # return 0 00:16:38.728 22:16:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:38.728 22:16:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.728 22:16:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:38.728 22:16:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:38.728 22:16:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.728 22:16:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:38.728 22:16:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:38.728 22:16:35 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:38.728 22:16:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:38.728 22:16:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:38.728 22:16:35 -- common/autotest_common.sh@10 -- # set +x 00:16:38.728 22:16:35 -- nvmf/common.sh@469 -- # nvmfpid=77184 00:16:38.728 22:16:35 -- nvmf/common.sh@470 -- # waitforlisten 77184 00:16:38.728 22:16:35 -- common/autotest_common.sh@829 -- # '[' -z 77184 ']' 00:16:38.728 22:16:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.728 22:16:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.728 22:16:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:38.728 22:16:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.728 22:16:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:38.728 22:16:35 -- common/autotest_common.sh@10 -- # set +x 00:16:38.728 [2024-11-17 22:16:35.222627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:38.728 [2024-11-17 22:16:35.222716] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.987 [2024-11-17 22:16:35.363636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:38.987 [2024-11-17 22:16:35.433413] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:38.987 [2024-11-17 22:16:35.433554] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.987 [2024-11-17 22:16:35.433567] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.987 [2024-11-17 22:16:35.433575] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:38.987 [2024-11-17 22:16:35.433765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:38.987 [2024-11-17 22:16:35.434278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:38.987 [2024-11-17 22:16:35.434423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:38.987 [2024-11-17 22:16:35.434511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.554 22:16:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.554 22:16:36 -- common/autotest_common.sh@862 -- # return 0 00:16:39.555 22:16:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:39.555 22:16:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:39.555 22:16:36 -- common/autotest_common.sh@10 -- # set +x 00:16:39.813 22:16:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.813 22:16:36 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:39.813 22:16:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.813 22:16:36 -- common/autotest_common.sh@10 -- # set +x 00:16:39.813 [2024-11-17 22:16:36.190415] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.813 22:16:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.813 22:16:36 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:39.813 22:16:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.813 22:16:36 -- common/autotest_common.sh@10 -- # set +x 00:16:39.813 Malloc0 00:16:39.813 22:16:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.813 22:16:36 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:39.813 22:16:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.813 22:16:36 -- common/autotest_common.sh@10 -- # set +x 00:16:39.813 22:16:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.814 22:16:36 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:39.814 22:16:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.814 
22:16:36 -- common/autotest_common.sh@10 -- # set +x 00:16:39.814 22:16:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.814 22:16:36 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.814 22:16:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.814 22:16:36 -- common/autotest_common.sh@10 -- # set +x 00:16:39.814 [2024-11-17 22:16:36.250840] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.814 22:16:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.814 22:16:36 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:39.814 22:16:36 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:39.814 22:16:36 -- nvmf/common.sh@520 -- # config=() 00:16:39.814 22:16:36 -- nvmf/common.sh@520 -- # local subsystem config 00:16:39.814 22:16:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:39.814 22:16:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:39.814 { 00:16:39.814 "params": { 00:16:39.814 "name": "Nvme$subsystem", 00:16:39.814 "trtype": "$TEST_TRANSPORT", 00:16:39.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:39.814 "adrfam": "ipv4", 00:16:39.814 "trsvcid": "$NVMF_PORT", 00:16:39.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:39.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:39.814 "hdgst": ${hdgst:-false}, 00:16:39.814 "ddgst": ${ddgst:-false} 00:16:39.814 }, 00:16:39.814 "method": "bdev_nvme_attach_controller" 00:16:39.814 } 00:16:39.814 EOF 00:16:39.814 )") 00:16:39.814 22:16:36 -- nvmf/common.sh@542 -- # cat 00:16:39.814 22:16:36 -- nvmf/common.sh@544 -- # jq . 00:16:39.814 22:16:36 -- nvmf/common.sh@545 -- # IFS=, 00:16:39.814 22:16:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:39.814 "params": { 00:16:39.814 "name": "Nvme1", 00:16:39.814 "trtype": "tcp", 00:16:39.814 "traddr": "10.0.0.2", 00:16:39.814 "adrfam": "ipv4", 00:16:39.814 "trsvcid": "4420", 00:16:39.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:39.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:39.814 "hdgst": false, 00:16:39.814 "ddgst": false 00:16:39.814 }, 00:16:39.814 "method": "bdev_nvme_attach_controller" 00:16:39.814 }' 00:16:39.814 [2024-11-17 22:16:36.312019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:39.814 [2024-11-17 22:16:36.312118] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77238 ] 00:16:40.073 [2024-11-17 22:16:36.456127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:40.073 [2024-11-17 22:16:36.547864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.073 [2024-11-17 22:16:36.548010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.073 [2024-11-17 22:16:36.548018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.332 [2024-11-17 22:16:36.726945] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:40.332 [2024-11-17 22:16:36.727001] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:40.332 I/O targets: 00:16:40.332 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:40.332 00:16:40.332 00:16:40.332 CUnit - A unit testing framework for C - Version 2.1-3 00:16:40.332 http://cunit.sourceforge.net/ 00:16:40.332 00:16:40.332 00:16:40.332 Suite: bdevio tests on: Nvme1n1 00:16:40.332 Test: blockdev write read block ...passed 00:16:40.332 Test: blockdev write zeroes read block ...passed 00:16:40.332 Test: blockdev write zeroes read no split ...passed 00:16:40.332 Test: blockdev write zeroes read split ...passed 00:16:40.332 Test: blockdev write zeroes read split partial ...passed 00:16:40.332 Test: blockdev reset ...[2024-11-17 22:16:36.846665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:40.332 [2024-11-17 22:16:36.846782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f4910 (9): Bad file descriptor 00:16:40.332 [2024-11-17 22:16:36.867144] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:40.332 passed 00:16:40.332 Test: blockdev write read 8 blocks ...passed 00:16:40.332 Test: blockdev write read size > 128k ...passed 00:16:40.332 Test: blockdev write read invalid size ...passed 00:16:40.332 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:40.332 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:40.332 Test: blockdev write read max offset ...passed 00:16:40.591 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:40.591 Test: blockdev writev readv 8 blocks ...passed 00:16:40.591 Test: blockdev writev readv 30 x 1block ...passed 00:16:40.591 Test: blockdev writev readv block ...passed 00:16:40.591 Test: blockdev writev readv size > 128k ...passed 00:16:40.591 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:40.591 Test: blockdev comparev and writev ...[2024-11-17 22:16:37.041404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.591 [2024-11-17 22:16:37.041455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.591 [2024-11-17 22:16:37.041490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.591 [2024-11-17 22:16:37.041504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:40.591 [2024-11-17 22:16:37.042130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.591 [2024-11-17 22:16:37.042160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:40.591 [2024-11-17 22:16:37.042178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.591 [2024-11-17 22:16:37.042188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:40.591 [2024-11-17 22:16:37.042655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.591 [2024-11-17 22:16:37.042682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:40.591 [2024-11-17 22:16:37.042699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.591 [2024-11-17 22:16:37.042709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:40.591 [2024-11-17 22:16:37.043287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.591 [2024-11-17 22:16:37.043332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:40.591 [2024-11-17 22:16:37.043348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.591 [2024-11-17 22:16:37.043358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:40.591 passed 00:16:40.591 Test: blockdev nvme passthru rw ...passed 00:16:40.591 Test: blockdev nvme passthru vendor specific ...[2024-11-17 22:16:37.125062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.591 [2024-11-17 22:16:37.125092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:40.591 [2024-11-17 22:16:37.125532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.591 [2024-11-17 22:16:37.125559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:40.591 [2024-11-17 22:16:37.125693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.591 [2024-11-17 22:16:37.125707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:40.591 [2024-11-17 22:16:37.126138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.591 [2024-11-17 22:16:37.126162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:40.591 passed 00:16:40.591 Test: blockdev nvme admin passthru ...passed 00:16:40.591 Test: blockdev copy ...passed 00:16:40.591 00:16:40.591 Run Summary: Type Total Ran Passed Failed Inactive 00:16:40.591 suites 1 1 n/a 0 0 00:16:40.591 tests 23 23 23 0 0 00:16:40.591 asserts 152 152 152 0 n/a 00:16:40.591 00:16:40.591 Elapsed time = 0.905 seconds 00:16:40.850 22:16:37 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.850 22:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.850 22:16:37 -- common/autotest_common.sh@10 -- # set +x 00:16:40.850 22:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.850 22:16:37 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:40.850 22:16:37 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:40.850 22:16:37 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:40.850 22:16:37 -- nvmf/common.sh@116 -- # sync 00:16:40.850 22:16:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:40.850 22:16:37 -- nvmf/common.sh@119 -- # set +e 00:16:40.850 22:16:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:40.850 22:16:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:40.850 rmmod nvme_tcp 00:16:40.850 rmmod nvme_fabrics 00:16:41.109 rmmod nvme_keyring 00:16:41.109 22:16:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:41.109 22:16:37 -- nvmf/common.sh@123 -- # set -e 00:16:41.109 22:16:37 -- nvmf/common.sh@124 -- # return 0 00:16:41.109 22:16:37 -- nvmf/common.sh@477 -- # '[' -n 77184 ']' 00:16:41.109 22:16:37 -- nvmf/common.sh@478 -- # killprocess 77184 00:16:41.109 22:16:37 -- common/autotest_common.sh@936 -- # '[' -z 77184 ']' 00:16:41.109 22:16:37 -- common/autotest_common.sh@940 -- # kill -0 77184 00:16:41.109 22:16:37 -- common/autotest_common.sh@941 -- # uname 00:16:41.109 22:16:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:41.109 22:16:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77184 00:16:41.109 22:16:37 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:41.109 22:16:37 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:41.109 killing process with pid 77184 00:16:41.109 22:16:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77184' 00:16:41.109 22:16:37 -- common/autotest_common.sh@955 -- # kill 77184 00:16:41.109 22:16:37 -- common/autotest_common.sh@960 -- # wait 77184 00:16:41.367 22:16:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:41.367 22:16:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:41.367 22:16:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:41.367 22:16:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.367 22:16:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:41.367 22:16:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.367 22:16:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.367 22:16:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.367 22:16:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:41.367 00:16:41.367 real 0m3.160s 00:16:41.367 user 0m11.368s 00:16:41.367 sys 0m0.781s 00:16:41.367 22:16:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:41.367 22:16:37 -- common/autotest_common.sh@10 -- # set +x 00:16:41.367 ************************************ 00:16:41.367 END TEST nvmf_bdevio 00:16:41.367 ************************************ 00:16:41.367 22:16:37 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:41.367 22:16:37 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:41.367 22:16:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:41.367 22:16:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:41.367 22:16:37 -- common/autotest_common.sh@10 -- # set +x 00:16:41.367 ************************************ 00:16:41.367 START TEST nvmf_bdevio_no_huge 00:16:41.367 ************************************ 00:16:41.367 22:16:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:41.367 * Looking for test storage... 
00:16:41.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:41.368 22:16:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:41.368 22:16:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:41.368 22:16:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:41.627 22:16:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:41.627 22:16:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:41.627 22:16:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:41.627 22:16:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:41.627 22:16:38 -- scripts/common.sh@335 -- # IFS=.-: 00:16:41.627 22:16:38 -- scripts/common.sh@335 -- # read -ra ver1 00:16:41.627 22:16:38 -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.627 22:16:38 -- scripts/common.sh@336 -- # read -ra ver2 00:16:41.627 22:16:38 -- scripts/common.sh@337 -- # local 'op=<' 00:16:41.627 22:16:38 -- scripts/common.sh@339 -- # ver1_l=2 00:16:41.627 22:16:38 -- scripts/common.sh@340 -- # ver2_l=1 00:16:41.627 22:16:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:41.627 22:16:38 -- scripts/common.sh@343 -- # case "$op" in 00:16:41.627 22:16:38 -- scripts/common.sh@344 -- # : 1 00:16:41.627 22:16:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:41.627 22:16:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:41.627 22:16:38 -- scripts/common.sh@364 -- # decimal 1 00:16:41.627 22:16:38 -- scripts/common.sh@352 -- # local d=1 00:16:41.627 22:16:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.627 22:16:38 -- scripts/common.sh@354 -- # echo 1 00:16:41.627 22:16:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:41.627 22:16:38 -- scripts/common.sh@365 -- # decimal 2 00:16:41.627 22:16:38 -- scripts/common.sh@352 -- # local d=2 00:16:41.627 22:16:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.627 22:16:38 -- scripts/common.sh@354 -- # echo 2 00:16:41.627 22:16:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:41.627 22:16:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:41.627 22:16:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:41.627 22:16:38 -- scripts/common.sh@367 -- # return 0 00:16:41.627 22:16:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.627 22:16:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.627 --rc genhtml_branch_coverage=1 00:16:41.627 --rc genhtml_function_coverage=1 00:16:41.627 --rc genhtml_legend=1 00:16:41.627 --rc geninfo_all_blocks=1 00:16:41.627 --rc geninfo_unexecuted_blocks=1 00:16:41.627 00:16:41.627 ' 00:16:41.627 22:16:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.627 --rc genhtml_branch_coverage=1 00:16:41.627 --rc genhtml_function_coverage=1 00:16:41.627 --rc genhtml_legend=1 00:16:41.627 --rc geninfo_all_blocks=1 00:16:41.627 --rc geninfo_unexecuted_blocks=1 00:16:41.627 00:16:41.627 ' 00:16:41.627 22:16:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.627 --rc genhtml_branch_coverage=1 00:16:41.627 --rc genhtml_function_coverage=1 00:16:41.627 --rc genhtml_legend=1 00:16:41.627 --rc geninfo_all_blocks=1 00:16:41.627 --rc geninfo_unexecuted_blocks=1 00:16:41.627 00:16:41.627 ' 00:16:41.627 
22:16:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.627 --rc genhtml_branch_coverage=1 00:16:41.627 --rc genhtml_function_coverage=1 00:16:41.627 --rc genhtml_legend=1 00:16:41.627 --rc geninfo_all_blocks=1 00:16:41.627 --rc geninfo_unexecuted_blocks=1 00:16:41.627 00:16:41.627 ' 00:16:41.627 22:16:38 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:41.627 22:16:38 -- nvmf/common.sh@7 -- # uname -s 00:16:41.627 22:16:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.627 22:16:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.627 22:16:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.627 22:16:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.627 22:16:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.627 22:16:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.627 22:16:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.627 22:16:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.627 22:16:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.627 22:16:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.627 22:16:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:16:41.627 22:16:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:16:41.627 22:16:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.627 22:16:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.627 22:16:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:41.627 22:16:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:41.627 22:16:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.627 22:16:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.627 22:16:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.627 22:16:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.627 22:16:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.627 22:16:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.627 22:16:38 -- paths/export.sh@5 -- # export PATH 00:16:41.628 22:16:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.628 22:16:38 -- nvmf/common.sh@46 -- # : 0 00:16:41.628 22:16:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:41.628 22:16:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:41.628 22:16:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:41.628 22:16:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.628 22:16:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.628 22:16:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:41.628 22:16:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:41.628 22:16:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:41.628 22:16:38 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:41.628 22:16:38 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:41.628 22:16:38 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:41.628 22:16:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:41.628 22:16:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.628 22:16:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:41.628 22:16:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:41.628 22:16:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:41.628 22:16:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.628 22:16:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.628 22:16:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.628 22:16:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:41.628 22:16:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:41.628 22:16:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:41.628 22:16:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:41.628 22:16:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:41.628 22:16:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:41.628 22:16:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.628 22:16:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.628 22:16:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:41.628 22:16:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:41.628 22:16:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:41.628 22:16:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:41.628 22:16:38 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:41.628 22:16:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.628 22:16:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:41.628 22:16:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:41.628 22:16:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:41.628 22:16:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:41.628 22:16:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:41.628 22:16:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:41.628 Cannot find device "nvmf_tgt_br" 00:16:41.628 22:16:38 -- nvmf/common.sh@154 -- # true 00:16:41.628 22:16:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.628 Cannot find device "nvmf_tgt_br2" 00:16:41.628 22:16:38 -- nvmf/common.sh@155 -- # true 00:16:41.628 22:16:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:41.628 22:16:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:41.628 Cannot find device "nvmf_tgt_br" 00:16:41.628 22:16:38 -- nvmf/common.sh@157 -- # true 00:16:41.628 22:16:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:41.628 Cannot find device "nvmf_tgt_br2" 00:16:41.628 22:16:38 -- nvmf/common.sh@158 -- # true 00:16:41.628 22:16:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:41.628 22:16:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:41.887 22:16:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.887 22:16:38 -- nvmf/common.sh@161 -- # true 00:16:41.887 22:16:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.887 22:16:38 -- nvmf/common.sh@162 -- # true 00:16:41.887 22:16:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:41.887 22:16:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:41.887 22:16:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:41.887 22:16:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:41.887 22:16:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:41.887 22:16:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:41.887 22:16:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:41.887 22:16:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:41.887 22:16:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:41.887 22:16:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:41.887 22:16:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:41.887 22:16:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:41.887 22:16:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:41.887 22:16:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:41.887 22:16:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:41.887 22:16:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:41.887 22:16:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:41.887 22:16:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:41.887 22:16:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:41.887 22:16:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:41.887 22:16:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:41.887 22:16:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:41.887 22:16:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:41.887 22:16:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:41.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:16:41.887 00:16:41.887 --- 10.0.0.2 ping statistics --- 00:16:41.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.887 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:41.887 22:16:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:41.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:41.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:16:41.887 00:16:41.887 --- 10.0.0.3 ping statistics --- 00:16:41.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.887 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:41.887 22:16:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:41.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:41.887 00:16:41.887 --- 10.0.0.1 ping statistics --- 00:16:41.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.887 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:41.887 22:16:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.887 22:16:38 -- nvmf/common.sh@421 -- # return 0 00:16:41.887 22:16:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:41.887 22:16:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.887 22:16:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:41.887 22:16:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:41.887 22:16:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.887 22:16:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:41.887 22:16:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:41.887 22:16:38 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:41.887 22:16:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:41.887 22:16:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.887 22:16:38 -- common/autotest_common.sh@10 -- # set +x 00:16:41.887 22:16:38 -- nvmf/common.sh@469 -- # nvmfpid=77423 00:16:41.887 22:16:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:41.887 22:16:38 -- nvmf/common.sh@470 -- # waitforlisten 77423 00:16:41.887 22:16:38 -- common/autotest_common.sh@829 -- # '[' -z 77423 ']' 00:16:41.887 22:16:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.887 22:16:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:41.887 22:16:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.887 22:16:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.887 22:16:38 -- common/autotest_common.sh@10 -- # set +x 00:16:42.148 [2024-11-17 22:16:38.534956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:42.148 [2024-11-17 22:16:38.535027] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:42.148 [2024-11-17 22:16:38.670635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:42.407 [2024-11-17 22:16:38.772980] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:42.407 [2024-11-17 22:16:38.773130] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.407 [2024-11-17 22:16:38.773142] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.407 [2024-11-17 22:16:38.773149] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.407 [2024-11-17 22:16:38.773476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:42.407 [2024-11-17 22:16:38.773645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:42.407 [2024-11-17 22:16:38.773789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:42.407 [2024-11-17 22:16:38.773796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.973 22:16:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.973 22:16:39 -- common/autotest_common.sh@862 -- # return 0 00:16:42.973 22:16:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:42.973 22:16:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:42.973 22:16:39 -- common/autotest_common.sh@10 -- # set +x 00:16:43.232 22:16:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.232 22:16:39 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:43.232 22:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.232 22:16:39 -- common/autotest_common.sh@10 -- # set +x 00:16:43.232 [2024-11-17 22:16:39.602982] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.232 22:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.232 22:16:39 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:43.232 22:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.232 22:16:39 -- common/autotest_common.sh@10 -- # set +x 00:16:43.232 Malloc0 00:16:43.232 22:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.232 22:16:39 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:43.232 22:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.232 22:16:39 -- common/autotest_common.sh@10 -- # set +x 00:16:43.232 22:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.232 22:16:39 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:43.232 22:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.232 
22:16:39 -- common/autotest_common.sh@10 -- # set +x 00:16:43.232 22:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.232 22:16:39 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.232 22:16:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.232 22:16:39 -- common/autotest_common.sh@10 -- # set +x 00:16:43.232 [2024-11-17 22:16:39.641576] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.232 22:16:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.232 22:16:39 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:43.232 22:16:39 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:43.232 22:16:39 -- nvmf/common.sh@520 -- # config=() 00:16:43.232 22:16:39 -- nvmf/common.sh@520 -- # local subsystem config 00:16:43.232 22:16:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:43.232 22:16:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:43.232 { 00:16:43.232 "params": { 00:16:43.232 "name": "Nvme$subsystem", 00:16:43.232 "trtype": "$TEST_TRANSPORT", 00:16:43.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:43.232 "adrfam": "ipv4", 00:16:43.233 "trsvcid": "$NVMF_PORT", 00:16:43.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:43.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:43.233 "hdgst": ${hdgst:-false}, 00:16:43.233 "ddgst": ${ddgst:-false} 00:16:43.233 }, 00:16:43.233 "method": "bdev_nvme_attach_controller" 00:16:43.233 } 00:16:43.233 EOF 00:16:43.233 )") 00:16:43.233 22:16:39 -- nvmf/common.sh@542 -- # cat 00:16:43.233 22:16:39 -- nvmf/common.sh@544 -- # jq . 00:16:43.233 22:16:39 -- nvmf/common.sh@545 -- # IFS=, 00:16:43.233 22:16:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:43.233 "params": { 00:16:43.233 "name": "Nvme1", 00:16:43.233 "trtype": "tcp", 00:16:43.233 "traddr": "10.0.0.2", 00:16:43.233 "adrfam": "ipv4", 00:16:43.233 "trsvcid": "4420", 00:16:43.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:43.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:43.233 "hdgst": false, 00:16:43.233 "ddgst": false 00:16:43.233 }, 00:16:43.233 "method": "bdev_nvme_attach_controller" 00:16:43.233 }' 00:16:43.233 [2024-11-17 22:16:39.701460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:43.233 [2024-11-17 22:16:39.701574] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid77479 ] 00:16:43.491 [2024-11-17 22:16:39.849433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:43.491 [2024-11-17 22:16:39.962089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.491 [2024-11-17 22:16:39.962204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.491 [2024-11-17 22:16:39.962210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.750 [2024-11-17 22:16:40.125923] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:43.750 [2024-11-17 22:16:40.125962] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:43.750 I/O targets: 00:16:43.750 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:43.750 00:16:43.750 00:16:43.750 CUnit - A unit testing framework for C - Version 2.1-3 00:16:43.750 http://cunit.sourceforge.net/ 00:16:43.750 00:16:43.750 00:16:43.750 Suite: bdevio tests on: Nvme1n1 00:16:43.750 Test: blockdev write read block ...passed 00:16:43.750 Test: blockdev write zeroes read block ...passed 00:16:43.750 Test: blockdev write zeroes read no split ...passed 00:16:43.750 Test: blockdev write zeroes read split ...passed 00:16:43.750 Test: blockdev write zeroes read split partial ...passed 00:16:43.750 Test: blockdev reset ...[2024-11-17 22:16:40.257229] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:43.750 [2024-11-17 22:16:40.257318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f91c0 (9): Bad file descriptor 00:16:43.750 [2024-11-17 22:16:40.274054] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:43.750 passed 00:16:43.750 Test: blockdev write read 8 blocks ...passed 00:16:43.750 Test: blockdev write read size > 128k ...passed 00:16:43.750 Test: blockdev write read invalid size ...passed 00:16:43.750 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:43.750 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:43.750 Test: blockdev write read max offset ...passed 00:16:44.010 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:44.010 Test: blockdev writev readv 8 blocks ...passed 00:16:44.010 Test: blockdev writev readv 30 x 1block ...passed 00:16:44.010 Test: blockdev writev readv block ...passed 00:16:44.010 Test: blockdev writev readv size > 128k ...passed 00:16:44.010 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:44.010 Test: blockdev comparev and writev ...[2024-11-17 22:16:40.448235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:44.010 [2024-11-17 22:16:40.448383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:44.010 [2024-11-17 22:16:40.448466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:44.010 [2024-11-17 22:16:40.448550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:44.010 [2024-11-17 22:16:40.449210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:44.010 [2024-11-17 22:16:40.449325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:44.010 [2024-11-17 22:16:40.449401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:44.010 [2024-11-17 22:16:40.449467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:44.010 [2024-11-17 22:16:40.450029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:44.010 [2024-11-17 22:16:40.450127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:44.010 [2024-11-17 22:16:40.450208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:44.010 [2024-11-17 22:16:40.450305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:44.010 [2024-11-17 22:16:40.450932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:44.010 [2024-11-17 22:16:40.451020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:44.010 [2024-11-17 22:16:40.451096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:44.010 [2024-11-17 22:16:40.451159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:44.010 passed 00:16:44.010 Test: blockdev nvme passthru rw ...passed 00:16:44.010 Test: blockdev nvme passthru vendor specific ...[2024-11-17 22:16:40.533074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:44.010 [2024-11-17 22:16:40.533248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:44.010 [2024-11-17 22:16:40.533587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:44.010 [2024-11-17 22:16:40.533682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:44.010 [2024-11-17 22:16:40.534185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:44.010 [2024-11-17 22:16:40.534264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:44.010 [2024-11-17 22:16:40.534634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:44.010 [2024-11-17 22:16:40.534721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:44.010 passed 00:16:44.010 Test: blockdev nvme admin passthru ...passed 00:16:44.010 Test: blockdev copy ...passed 00:16:44.010 00:16:44.010 Run Summary: Type Total Ran Passed Failed Inactive 00:16:44.010 suites 1 1 n/a 0 0 00:16:44.010 tests 23 23 23 0 0 00:16:44.010 asserts 152 152 152 0 n/a 00:16:44.010 00:16:44.010 Elapsed time = 0.923 seconds 00:16:44.577 22:16:40 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.577 22:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.577 22:16:40 -- common/autotest_common.sh@10 -- # set +x 00:16:44.577 22:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.577 22:16:41 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:44.577 22:16:41 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:44.577 22:16:41 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:44.577 22:16:41 -- nvmf/common.sh@116 -- # sync 00:16:44.577 22:16:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:44.577 22:16:41 -- nvmf/common.sh@119 -- # set +e 00:16:44.577 22:16:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:44.577 22:16:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:44.577 rmmod nvme_tcp 00:16:44.577 rmmod nvme_fabrics 00:16:44.577 rmmod nvme_keyring 00:16:44.577 22:16:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:44.577 22:16:41 -- nvmf/common.sh@123 -- # set -e 00:16:44.577 22:16:41 -- nvmf/common.sh@124 -- # return 0 00:16:44.577 22:16:41 -- nvmf/common.sh@477 -- # '[' -n 77423 ']' 00:16:44.577 22:16:41 -- nvmf/common.sh@478 -- # killprocess 77423 00:16:44.577 22:16:41 -- common/autotest_common.sh@936 -- # '[' -z 77423 ']' 00:16:44.577 22:16:41 -- common/autotest_common.sh@940 -- # kill -0 77423 00:16:44.577 22:16:41 -- common/autotest_common.sh@941 -- # uname 00:16:44.577 22:16:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:44.577 22:16:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77423 00:16:44.577 22:16:41 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:44.577 22:16:41 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:44.577 killing process with pid 77423 00:16:44.577 22:16:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77423' 00:16:44.577 22:16:41 -- common/autotest_common.sh@955 -- # kill 77423 00:16:44.577 22:16:41 -- common/autotest_common.sh@960 -- # wait 77423 00:16:45.143 22:16:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:45.143 22:16:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:45.143 22:16:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:45.143 22:16:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.143 22:16:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:45.143 22:16:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.143 22:16:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.143 22:16:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.143 22:16:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:45.143 00:16:45.143 real 0m3.664s 00:16:45.143 user 0m12.953s 00:16:45.143 sys 0m1.295s 00:16:45.143 22:16:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:45.143 22:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:45.143 ************************************ 00:16:45.143 END TEST nvmf_bdevio_no_huge 00:16:45.143 ************************************ 00:16:45.143 22:16:41 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:45.143 22:16:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:45.143 22:16:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:45.143 22:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:45.143 ************************************ 00:16:45.143 START TEST nvmf_tls 00:16:45.143 ************************************ 00:16:45.143 22:16:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:45.143 * Looking for test storage... 
00:16:45.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:45.143 22:16:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:45.143 22:16:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:45.143 22:16:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:45.143 22:16:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:45.143 22:16:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:45.143 22:16:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:45.143 22:16:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:45.143 22:16:41 -- scripts/common.sh@335 -- # IFS=.-: 00:16:45.143 22:16:41 -- scripts/common.sh@335 -- # read -ra ver1 00:16:45.143 22:16:41 -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.143 22:16:41 -- scripts/common.sh@336 -- # read -ra ver2 00:16:45.143 22:16:41 -- scripts/common.sh@337 -- # local 'op=<' 00:16:45.143 22:16:41 -- scripts/common.sh@339 -- # ver1_l=2 00:16:45.143 22:16:41 -- scripts/common.sh@340 -- # ver2_l=1 00:16:45.143 22:16:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:45.143 22:16:41 -- scripts/common.sh@343 -- # case "$op" in 00:16:45.143 22:16:41 -- scripts/common.sh@344 -- # : 1 00:16:45.143 22:16:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:45.143 22:16:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:45.143 22:16:41 -- scripts/common.sh@364 -- # decimal 1 00:16:45.143 22:16:41 -- scripts/common.sh@352 -- # local d=1 00:16:45.143 22:16:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.143 22:16:41 -- scripts/common.sh@354 -- # echo 1 00:16:45.143 22:16:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:45.143 22:16:41 -- scripts/common.sh@365 -- # decimal 2 00:16:45.143 22:16:41 -- scripts/common.sh@352 -- # local d=2 00:16:45.143 22:16:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.143 22:16:41 -- scripts/common.sh@354 -- # echo 2 00:16:45.143 22:16:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:45.143 22:16:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:45.143 22:16:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:45.143 22:16:41 -- scripts/common.sh@367 -- # return 0 00:16:45.143 22:16:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.143 22:16:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:45.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.143 --rc genhtml_branch_coverage=1 00:16:45.143 --rc genhtml_function_coverage=1 00:16:45.143 --rc genhtml_legend=1 00:16:45.143 --rc geninfo_all_blocks=1 00:16:45.143 --rc geninfo_unexecuted_blocks=1 00:16:45.143 00:16:45.143 ' 00:16:45.143 22:16:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:45.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.143 --rc genhtml_branch_coverage=1 00:16:45.143 --rc genhtml_function_coverage=1 00:16:45.143 --rc genhtml_legend=1 00:16:45.143 --rc geninfo_all_blocks=1 00:16:45.143 --rc geninfo_unexecuted_blocks=1 00:16:45.143 00:16:45.143 ' 00:16:45.143 22:16:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:45.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.144 --rc genhtml_branch_coverage=1 00:16:45.144 --rc genhtml_function_coverage=1 00:16:45.144 --rc genhtml_legend=1 00:16:45.144 --rc geninfo_all_blocks=1 00:16:45.144 --rc geninfo_unexecuted_blocks=1 00:16:45.144 00:16:45.144 ' 00:16:45.144 
22:16:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:45.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.144 --rc genhtml_branch_coverage=1 00:16:45.144 --rc genhtml_function_coverage=1 00:16:45.144 --rc genhtml_legend=1 00:16:45.144 --rc geninfo_all_blocks=1 00:16:45.144 --rc geninfo_unexecuted_blocks=1 00:16:45.144 00:16:45.144 ' 00:16:45.144 22:16:41 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:45.144 22:16:41 -- nvmf/common.sh@7 -- # uname -s 00:16:45.144 22:16:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.144 22:16:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.144 22:16:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.144 22:16:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.144 22:16:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.144 22:16:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.144 22:16:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.144 22:16:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.144 22:16:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.144 22:16:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.402 22:16:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:16:45.402 22:16:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:16:45.402 22:16:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.402 22:16:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.402 22:16:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:45.402 22:16:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:45.402 22:16:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.402 22:16:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.402 22:16:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.402 22:16:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.402 22:16:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.402 22:16:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.402 22:16:41 -- paths/export.sh@5 -- # export PATH 00:16:45.402 22:16:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.402 22:16:41 -- nvmf/common.sh@46 -- # : 0 00:16:45.402 22:16:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:45.402 22:16:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:45.402 22:16:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:45.402 22:16:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.402 22:16:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.402 22:16:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:45.402 22:16:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:45.402 22:16:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:45.402 22:16:41 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.402 22:16:41 -- target/tls.sh@71 -- # nvmftestinit 00:16:45.402 22:16:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:45.402 22:16:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.402 22:16:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:45.402 22:16:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:45.402 22:16:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:45.402 22:16:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.402 22:16:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.402 22:16:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.402 22:16:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:45.402 22:16:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:45.402 22:16:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:45.402 22:16:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:45.402 22:16:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:45.402 22:16:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:45.402 22:16:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.402 22:16:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.402 22:16:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:45.402 22:16:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:45.402 22:16:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:45.402 22:16:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:45.402 22:16:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:45.402 
22:16:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.402 22:16:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:45.402 22:16:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:45.402 22:16:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:45.402 22:16:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:45.402 22:16:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:45.402 22:16:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:45.402 Cannot find device "nvmf_tgt_br" 00:16:45.402 22:16:41 -- nvmf/common.sh@154 -- # true 00:16:45.402 22:16:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:45.402 Cannot find device "nvmf_tgt_br2" 00:16:45.402 22:16:41 -- nvmf/common.sh@155 -- # true 00:16:45.402 22:16:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:45.402 22:16:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:45.402 Cannot find device "nvmf_tgt_br" 00:16:45.402 22:16:41 -- nvmf/common.sh@157 -- # true 00:16:45.402 22:16:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:45.402 Cannot find device "nvmf_tgt_br2" 00:16:45.402 22:16:41 -- nvmf/common.sh@158 -- # true 00:16:45.403 22:16:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:45.403 22:16:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:45.403 22:16:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:45.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.403 22:16:41 -- nvmf/common.sh@161 -- # true 00:16:45.403 22:16:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:45.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.403 22:16:41 -- nvmf/common.sh@162 -- # true 00:16:45.403 22:16:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:45.403 22:16:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:45.403 22:16:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:45.403 22:16:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:45.403 22:16:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:45.403 22:16:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:45.403 22:16:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:45.403 22:16:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:45.403 22:16:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:45.403 22:16:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:45.403 22:16:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:45.661 22:16:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:45.661 22:16:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:45.661 22:16:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:45.661 22:16:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:45.661 22:16:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:45.661 22:16:42 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:45.661 22:16:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:45.661 22:16:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:45.661 22:16:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:45.661 22:16:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:45.661 22:16:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:45.661 22:16:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:45.661 22:16:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:45.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:16:45.661 00:16:45.661 --- 10.0.0.2 ping statistics --- 00:16:45.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.661 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:45.661 22:16:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:45.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:45.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:45.661 00:16:45.661 --- 10.0.0.3 ping statistics --- 00:16:45.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.662 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:45.662 22:16:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:45.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:45.662 00:16:45.662 --- 10.0.0.1 ping statistics --- 00:16:45.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.662 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:45.662 22:16:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.662 22:16:42 -- nvmf/common.sh@421 -- # return 0 00:16:45.662 22:16:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:45.662 22:16:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.662 22:16:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:45.662 22:16:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:45.662 22:16:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.662 22:16:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:45.662 22:16:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:45.662 22:16:42 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:45.662 22:16:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:45.662 22:16:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.662 22:16:42 -- common/autotest_common.sh@10 -- # set +x 00:16:45.662 22:16:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:45.662 22:16:42 -- nvmf/common.sh@469 -- # nvmfpid=77677 00:16:45.662 22:16:42 -- nvmf/common.sh@470 -- # waitforlisten 77677 00:16:45.662 22:16:42 -- common/autotest_common.sh@829 -- # '[' -z 77677 ']' 00:16:45.662 22:16:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.662 22:16:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
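For readers skimming the trace above, the nvmf_veth_init block boils down to the small topology sketched below: the initiator side stays in the default namespace on 10.0.0.1, the target side lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.2, and both veth peers hang off the nvmf_br bridge so the ping checks in the log succeed. This is a condensed, hand-written reconstruction of commands already visible in the trace (the second target interface, nvmf_tgt_if2/nvmf_tgt_br2 on 10.0.0.3, is omitted for brevity); it is not the project's nvmf/common.sh itself.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the two sides together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target, as verified in the log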
00:16:45.662 22:16:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.662 22:16:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.662 22:16:42 -- common/autotest_common.sh@10 -- # set +x 00:16:45.662 [2024-11-17 22:16:42.225876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:45.662 [2024-11-17 22:16:42.225967] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.920 [2024-11-17 22:16:42.370612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.921 [2024-11-17 22:16:42.482572] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:45.921 [2024-11-17 22:16:42.482770] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.921 [2024-11-17 22:16:42.482788] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.921 [2024-11-17 22:16:42.482800] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.921 [2024-11-17 22:16:42.482844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.855 22:16:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.855 22:16:43 -- common/autotest_common.sh@862 -- # return 0 00:16:46.855 22:16:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:46.855 22:16:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.855 22:16:43 -- common/autotest_common.sh@10 -- # set +x 00:16:46.855 22:16:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.855 22:16:43 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:46.855 22:16:43 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:47.113 true 00:16:47.113 22:16:43 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:47.113 22:16:43 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:47.372 22:16:43 -- target/tls.sh@82 -- # version=0 00:16:47.372 22:16:43 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:47.372 22:16:43 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:47.632 22:16:44 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:47.632 22:16:44 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:47.890 22:16:44 -- target/tls.sh@90 -- # version=13 00:16:47.890 22:16:44 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:47.890 22:16:44 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:48.149 22:16:44 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:48.149 22:16:44 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:48.407 22:16:44 -- target/tls.sh@98 -- # version=7 00:16:48.407 22:16:44 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:48.407 22:16:44 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:48.407 22:16:44 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:48.665 22:16:45 -- 
target/tls.sh@105 -- # ktls=false 00:16:48.665 22:16:45 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:48.665 22:16:45 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:48.923 22:16:45 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:48.923 22:16:45 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:49.182 22:16:45 -- target/tls.sh@113 -- # ktls=true 00:16:49.182 22:16:45 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:49.182 22:16:45 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:49.182 22:16:45 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:49.182 22:16:45 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:49.440 22:16:46 -- target/tls.sh@121 -- # ktls=false 00:16:49.441 22:16:46 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:49.441 22:16:46 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:49.441 22:16:46 -- target/tls.sh@49 -- # local key hash crc 00:16:49.441 22:16:46 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:49.441 22:16:46 -- target/tls.sh@51 -- # hash=01 00:16:49.441 22:16:46 -- target/tls.sh@52 -- # gzip -1 -c 00:16:49.441 22:16:46 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:49.441 22:16:46 -- target/tls.sh@52 -- # tail -c8 00:16:49.441 22:16:46 -- target/tls.sh@52 -- # head -c 4 00:16:49.441 22:16:46 -- target/tls.sh@52 -- # crc='p$H�' 00:16:49.441 22:16:46 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:49.441 22:16:46 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:49.441 22:16:46 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:49.441 22:16:46 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:49.441 22:16:46 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:49.441 22:16:46 -- target/tls.sh@49 -- # local key hash crc 00:16:49.441 22:16:46 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:49.441 22:16:46 -- target/tls.sh@51 -- # hash=01 00:16:49.441 22:16:46 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:49.441 22:16:46 -- target/tls.sh@52 -- # gzip -1 -c 00:16:49.441 22:16:46 -- target/tls.sh@52 -- # tail -c8 00:16:49.441 22:16:46 -- target/tls.sh@52 -- # head -c 4 00:16:49.441 22:16:46 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:49.441 22:16:46 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:49.441 22:16:46 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:49.441 22:16:46 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:49.441 22:16:46 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:49.441 22:16:46 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:49.441 22:16:46 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:49.441 22:16:46 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:49.441 22:16:46 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
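The format_interchange_psk helper traced above turns a raw ASCII key into the NVMe TLS interchange format: base64(key bytes || 4-byte CRC32 of the key, taken from the gzip trailer), wrapped as NVMeTLSkey-1:01:<base64>:. The following is a minimal standalone re-derivation assuming GNU gzip and coreutils; it mirrors the gzip/tail/head/base64 pipeline in the trace but is not the test script itself.

  key="00112233445566778899aabbccddeeff"          # 32-character ASCII key, as used above
  payload=$(
    { printf '%s' "$key"                                          # key bytes
      printf '%s' "$key" | gzip -1 -c | tail -c8 | head -c4       # CRC32 from the gzip trailer
    } | base64
  )
  printf 'NVMeTLSkey-1:01:%s:\n' "$payload"
  # Expected output (matches the key1.txt contents in the log):
  # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: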
00:16:49.441 22:16:46 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:49.441 22:16:46 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:49.441 22:16:46 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:50.008 22:16:46 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:50.266 22:16:46 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:50.266 22:16:46 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:50.266 22:16:46 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:50.524 [2024-11-17 22:16:46.929713] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.524 22:16:46 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:50.782 22:16:47 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:50.782 [2024-11-17 22:16:47.385804] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:50.782 [2024-11-17 22:16:47.386046] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.040 22:16:47 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:51.040 malloc0 00:16:51.040 22:16:47 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:51.299 22:16:47 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:51.558 22:16:47 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:03.764 Initializing NVMe Controllers 00:17:03.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:03.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:03.764 Initialization complete. Launching workers. 
00:17:03.764 ======================================================== 00:17:03.764 Latency(us) 00:17:03.764 Device Information : IOPS MiB/s Average min max 00:17:03.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11880.34 46.41 5387.88 1711.64 9520.23 00:17:03.764 ======================================================== 00:17:03.764 Total : 11880.34 46.41 5387.88 1711.64 9520.23 00:17:03.764 00:17:03.764 22:16:58 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:03.764 22:16:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:03.764 22:16:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:03.764 22:16:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:03.764 22:16:58 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:03.764 22:16:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:03.764 22:16:58 -- target/tls.sh@28 -- # bdevperf_pid=78041 00:17:03.764 22:16:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:03.764 22:16:58 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:03.764 22:16:58 -- target/tls.sh@31 -- # waitforlisten 78041 /var/tmp/bdevperf.sock 00:17:03.764 22:16:58 -- common/autotest_common.sh@829 -- # '[' -z 78041 ']' 00:17:03.764 22:16:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:03.764 22:16:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:03.764 22:16:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:03.764 22:16:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.764 22:16:58 -- common/autotest_common.sh@10 -- # set +x 00:17:03.764 [2024-11-17 22:16:58.249684] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:03.764 [2024-11-17 22:16:58.249810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78041 ] 00:17:03.764 [2024-11-17 22:16:58.390370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.764 [2024-11-17 22:16:58.484220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.764 22:16:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.764 22:16:59 -- common/autotest_common.sh@862 -- # return 0 00:17:03.764 22:16:59 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:03.764 [2024-11-17 22:16:59.346232] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:03.764 TLSTESTn1 00:17:03.764 22:16:59 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:03.764 Running I/O for 10 seconds... 
00:17:13.755 00:17:13.756 Latency(us) 00:17:13.756 [2024-11-17T22:17:10.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.756 [2024-11-17T22:17:10.371Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:13.756 Verification LBA range: start 0x0 length 0x2000 00:17:13.756 TLSTESTn1 : 10.01 6689.83 26.13 0.00 0.00 19104.07 4766.25 20494.89 00:17:13.756 [2024-11-17T22:17:10.371Z] =================================================================================================================== 00:17:13.756 [2024-11-17T22:17:10.371Z] Total : 6689.83 26.13 0.00 0.00 19104.07 4766.25 20494.89 00:17:13.756 0 00:17:13.756 22:17:09 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:13.756 22:17:09 -- target/tls.sh@45 -- # killprocess 78041 00:17:13.756 22:17:09 -- common/autotest_common.sh@936 -- # '[' -z 78041 ']' 00:17:13.756 22:17:09 -- common/autotest_common.sh@940 -- # kill -0 78041 00:17:13.756 22:17:09 -- common/autotest_common.sh@941 -- # uname 00:17:13.756 22:17:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.756 22:17:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78041 00:17:13.756 killing process with pid 78041 00:17:13.756 Received shutdown signal, test time was about 10.000000 seconds 00:17:13.756 00:17:13.756 Latency(us) 00:17:13.756 [2024-11-17T22:17:10.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.756 [2024-11-17T22:17:10.371Z] =================================================================================================================== 00:17:13.756 [2024-11-17T22:17:10.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:13.756 22:17:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:13.756 22:17:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:13.756 22:17:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78041' 00:17:13.756 22:17:09 -- common/autotest_common.sh@955 -- # kill 78041 00:17:13.756 22:17:09 -- common/autotest_common.sh@960 -- # wait 78041 00:17:13.756 22:17:09 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:13.756 22:17:09 -- common/autotest_common.sh@650 -- # local es=0 00:17:13.756 22:17:09 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:13.756 22:17:09 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:13.756 22:17:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.756 22:17:09 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:13.756 22:17:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.756 22:17:09 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:13.756 22:17:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:13.756 22:17:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:13.756 22:17:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:13.756 22:17:09 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:13.756 22:17:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.756 
22:17:09 -- target/tls.sh@28 -- # bdevperf_pid=78194 00:17:13.756 22:17:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.756 22:17:09 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.756 22:17:09 -- target/tls.sh@31 -- # waitforlisten 78194 /var/tmp/bdevperf.sock 00:17:13.756 22:17:09 -- common/autotest_common.sh@829 -- # '[' -z 78194 ']' 00:17:13.756 22:17:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.756 22:17:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.756 22:17:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.756 22:17:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.756 22:17:09 -- common/autotest_common.sh@10 -- # set +x 00:17:13.756 [2024-11-17 22:17:09.992582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:13.756 [2024-11-17 22:17:09.992904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78194 ] 00:17:13.756 [2024-11-17 22:17:10.130370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.756 [2024-11-17 22:17:10.210751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.370 22:17:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.370 22:17:10 -- common/autotest_common.sh@862 -- # return 0 00:17:14.370 22:17:10 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:14.629 [2024-11-17 22:17:11.153835] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:14.629 [2024-11-17 22:17:11.163978] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:14.629 [2024-11-17 22:17:11.164086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfc3d0 (107): Transport endpoint is not connected 00:17:14.629 [2024-11-17 22:17:11.165073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfc3d0 (9): Bad file descriptor 00:17:14.629 [2024-11-17 22:17:11.166070] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:14.629 [2024-11-17 22:17:11.166103] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:14.629 [2024-11-17 22:17:11.166112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:14.629 2024/11/17 22:17:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:14.629 request: 00:17:14.629 { 00:17:14.629 "method": "bdev_nvme_attach_controller", 00:17:14.629 "params": { 00:17:14.629 "name": "TLSTEST", 00:17:14.629 "trtype": "tcp", 00:17:14.629 "traddr": "10.0.0.2", 00:17:14.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.629 "adrfam": "ipv4", 00:17:14.629 "trsvcid": "4420", 00:17:14.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.629 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:14.629 } 00:17:14.629 } 00:17:14.629 Got JSON-RPC error response 00:17:14.629 GoRPCClient: error on JSON-RPC call 00:17:14.629 22:17:11 -- target/tls.sh@36 -- # killprocess 78194 00:17:14.629 22:17:11 -- common/autotest_common.sh@936 -- # '[' -z 78194 ']' 00:17:14.629 22:17:11 -- common/autotest_common.sh@940 -- # kill -0 78194 00:17:14.629 22:17:11 -- common/autotest_common.sh@941 -- # uname 00:17:14.629 22:17:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.629 22:17:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78194 00:17:14.629 killing process with pid 78194 00:17:14.629 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.629 00:17:14.629 Latency(us) 00:17:14.629 [2024-11-17T22:17:11.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.629 [2024-11-17T22:17:11.244Z] =================================================================================================================== 00:17:14.629 [2024-11-17T22:17:11.244Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.629 22:17:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:14.629 22:17:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:14.629 22:17:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78194' 00:17:14.629 22:17:11 -- common/autotest_common.sh@955 -- # kill 78194 00:17:14.629 22:17:11 -- common/autotest_common.sh@960 -- # wait 78194 00:17:15.197 22:17:11 -- target/tls.sh@37 -- # return 1 00:17:15.197 22:17:11 -- common/autotest_common.sh@653 -- # es=1 00:17:15.197 22:17:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:15.197 22:17:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:15.197 22:17:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:15.197 22:17:11 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:15.197 22:17:11 -- common/autotest_common.sh@650 -- # local es=0 00:17:15.197 22:17:11 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:15.197 22:17:11 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:15.197 22:17:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.197 22:17:11 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:15.197 22:17:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.197 22:17:11 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:15.197 22:17:11 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:15.197 22:17:11 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:15.197 22:17:11 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:15.197 22:17:11 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:15.197 22:17:11 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:15.197 22:17:11 -- target/tls.sh@28 -- # bdevperf_pid=78234 00:17:15.197 22:17:11 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:15.197 22:17:11 -- target/tls.sh@31 -- # waitforlisten 78234 /var/tmp/bdevperf.sock 00:17:15.197 22:17:11 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:15.197 22:17:11 -- common/autotest_common.sh@829 -- # '[' -z 78234 ']' 00:17:15.197 22:17:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:15.198 22:17:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.198 22:17:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:15.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:15.198 22:17:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.198 22:17:11 -- common/autotest_common.sh@10 -- # set +x 00:17:15.198 [2024-11-17 22:17:11.573298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:15.198 [2024-11-17 22:17:11.573602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78234 ] 00:17:15.198 [2024-11-17 22:17:11.709085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.198 [2024-11-17 22:17:11.782277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.135 22:17:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.135 22:17:12 -- common/autotest_common.sh@862 -- # return 0 00:17:16.135 22:17:12 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:16.135 [2024-11-17 22:17:12.686500] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:16.135 [2024-11-17 22:17:12.694013] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:16.135 [2024-11-17 22:17:12.694057] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:16.135 [2024-11-17 22:17:12.694151] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:16.135 [2024-11-17 22:17:12.694789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x10d43d0 (107): Transport endpoint is not connected 00:17:16.135 [2024-11-17 22:17:12.695758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d43d0 (9): Bad file descriptor 00:17:16.135 [2024-11-17 22:17:12.696761] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:16.135 [2024-11-17 22:17:12.696793] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:16.135 [2024-11-17 22:17:12.696802] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:16.135 2024/11/17 22:17:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:16.135 request: 00:17:16.135 { 00:17:16.135 "method": "bdev_nvme_attach_controller", 00:17:16.135 "params": { 00:17:16.135 "name": "TLSTEST", 00:17:16.135 "trtype": "tcp", 00:17:16.135 "traddr": "10.0.0.2", 00:17:16.135 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:16.135 "adrfam": "ipv4", 00:17:16.135 "trsvcid": "4420", 00:17:16.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.135 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:16.135 } 00:17:16.135 } 00:17:16.135 Got JSON-RPC error response 00:17:16.135 GoRPCClient: error on JSON-RPC call 00:17:16.135 22:17:12 -- target/tls.sh@36 -- # killprocess 78234 00:17:16.135 22:17:12 -- common/autotest_common.sh@936 -- # '[' -z 78234 ']' 00:17:16.135 22:17:12 -- common/autotest_common.sh@940 -- # kill -0 78234 00:17:16.135 22:17:12 -- common/autotest_common.sh@941 -- # uname 00:17:16.135 22:17:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:16.135 22:17:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78234 00:17:16.394 killing process with pid 78234 00:17:16.394 Received shutdown signal, test time was about 10.000000 seconds 00:17:16.394 00:17:16.394 Latency(us) 00:17:16.394 [2024-11-17T22:17:13.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.394 [2024-11-17T22:17:13.009Z] =================================================================================================================== 00:17:16.394 [2024-11-17T22:17:13.009Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:16.394 22:17:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:16.394 22:17:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:16.394 22:17:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78234' 00:17:16.394 22:17:12 -- common/autotest_common.sh@955 -- # kill 78234 00:17:16.394 22:17:12 -- common/autotest_common.sh@960 -- # wait 78234 00:17:16.654 22:17:13 -- target/tls.sh@37 -- # return 1 00:17:16.654 22:17:13 -- common/autotest_common.sh@653 -- # es=1 00:17:16.654 22:17:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.654 22:17:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.654 22:17:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.654 22:17:13 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:16.654 22:17:13 -- 
common/autotest_common.sh@650 -- # local es=0 00:17:16.654 22:17:13 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:16.654 22:17:13 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:16.654 22:17:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.654 22:17:13 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:16.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.654 22:17:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.654 22:17:13 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:16.654 22:17:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.654 22:17:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:16.654 22:17:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:16.654 22:17:13 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:16.654 22:17:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.654 22:17:13 -- target/tls.sh@28 -- # bdevperf_pid=78284 00:17:16.654 22:17:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.654 22:17:13 -- target/tls.sh@31 -- # waitforlisten 78284 /var/tmp/bdevperf.sock 00:17:16.654 22:17:13 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.654 22:17:13 -- common/autotest_common.sh@829 -- # '[' -z 78284 ']' 00:17:16.654 22:17:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.654 22:17:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.654 22:17:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.654 22:17:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.654 22:17:13 -- common/autotest_common.sh@10 -- # set +x 00:17:16.654 [2024-11-17 22:17:13.095379] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:16.654 [2024-11-17 22:17:13.095643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78284 ] 00:17:16.655 [2024-11-17 22:17:13.225771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.914 [2024-11-17 22:17:13.309327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.483 22:17:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.483 22:17:13 -- common/autotest_common.sh@862 -- # return 0 00:17:17.483 22:17:13 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:17.743 [2024-11-17 22:17:14.168048] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.743 [2024-11-17 22:17:14.174221] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:17.743 [2024-11-17 22:17:14.174271] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:17.743 [2024-11-17 22:17:14.174328] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:17.743 [2024-11-17 22:17:14.175318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8273d0 (107): Transport endpoint is not connected 00:17:17.743 [2024-11-17 22:17:14.176295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8273d0 (9): Bad file descriptor 00:17:17.743 [2024-11-17 22:17:14.177291] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:17.743 [2024-11-17 22:17:14.177310] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:17.743 [2024-11-17 22:17:14.177327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:17.743 2024/11/17 22:17:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:17.743 request: 00:17:17.743 { 00:17:17.743 "method": "bdev_nvme_attach_controller", 00:17:17.743 "params": { 00:17:17.743 "name": "TLSTEST", 00:17:17.743 "trtype": "tcp", 00:17:17.743 "traddr": "10.0.0.2", 00:17:17.743 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.743 "adrfam": "ipv4", 00:17:17.743 "trsvcid": "4420", 00:17:17.743 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:17.743 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:17.743 } 00:17:17.743 } 00:17:17.743 Got JSON-RPC error response 00:17:17.743 GoRPCClient: error on JSON-RPC call 00:17:17.743 22:17:14 -- target/tls.sh@36 -- # killprocess 78284 00:17:17.743 22:17:14 -- common/autotest_common.sh@936 -- # '[' -z 78284 ']' 00:17:17.743 22:17:14 -- common/autotest_common.sh@940 -- # kill -0 78284 00:17:17.743 22:17:14 -- common/autotest_common.sh@941 -- # uname 00:17:17.743 22:17:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.743 22:17:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78284 00:17:17.743 killing process with pid 78284 00:17:17.743 Received shutdown signal, test time was about 10.000000 seconds 00:17:17.743 00:17:17.743 Latency(us) 00:17:17.743 [2024-11-17T22:17:14.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.743 [2024-11-17T22:17:14.358Z] =================================================================================================================== 00:17:17.743 [2024-11-17T22:17:14.358Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:17.743 22:17:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:17.743 22:17:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:17.743 22:17:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78284' 00:17:17.743 22:17:14 -- common/autotest_common.sh@955 -- # kill 78284 00:17:17.743 22:17:14 -- common/autotest_common.sh@960 -- # wait 78284 00:17:18.003 22:17:14 -- target/tls.sh@37 -- # return 1 00:17:18.003 22:17:14 -- common/autotest_common.sh@653 -- # es=1 00:17:18.003 22:17:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:18.003 22:17:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:18.003 22:17:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:18.003 22:17:14 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:18.003 22:17:14 -- common/autotest_common.sh@650 -- # local es=0 00:17:18.003 22:17:14 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:18.003 22:17:14 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:18.003 22:17:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.003 22:17:14 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:18.003 22:17:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.003 22:17:14 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:18.003 22:17:14 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:18.003 22:17:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:18.003 22:17:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:18.003 22:17:14 -- target/tls.sh@23 -- # psk= 00:17:18.003 22:17:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.003 22:17:14 -- target/tls.sh@28 -- # bdevperf_pid=78325 00:17:18.003 22:17:14 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.003 22:17:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.003 22:17:14 -- target/tls.sh@31 -- # waitforlisten 78325 /var/tmp/bdevperf.sock 00:17:18.003 22:17:14 -- common/autotest_common.sh@829 -- # '[' -z 78325 ']' 00:17:18.003 22:17:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.003 22:17:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.003 22:17:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.003 22:17:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.003 22:17:14 -- common/autotest_common.sh@10 -- # set +x 00:17:18.003 [2024-11-17 22:17:14.587933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:18.003 [2024-11-17 22:17:14.588027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78325 ] 00:17:18.263 [2024-11-17 22:17:14.727026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.263 [2024-11-17 22:17:14.798566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.202 22:17:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.202 22:17:15 -- common/autotest_common.sh@862 -- # return 0 00:17:19.202 22:17:15 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:19.202 [2024-11-17 22:17:15.794404] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:19.202 [2024-11-17 22:17:15.796608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cedc0 (9): Bad file descriptor 00:17:19.202 [2024-11-17 22:17:15.797598] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:19.202 [2024-11-17 22:17:15.797617] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:19.202 [2024-11-17 22:17:15.797627] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:19.202 2024/11/17 22:17:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:19.202 request: 00:17:19.202 { 00:17:19.202 "method": "bdev_nvme_attach_controller", 00:17:19.202 "params": { 00:17:19.202 "name": "TLSTEST", 00:17:19.202 "trtype": "tcp", 00:17:19.202 "traddr": "10.0.0.2", 00:17:19.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.202 "adrfam": "ipv4", 00:17:19.202 "trsvcid": "4420", 00:17:19.202 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:19.202 } 00:17:19.202 } 00:17:19.202 Got JSON-RPC error response 00:17:19.202 GoRPCClient: error on JSON-RPC call 00:17:19.462 22:17:15 -- target/tls.sh@36 -- # killprocess 78325 00:17:19.462 22:17:15 -- common/autotest_common.sh@936 -- # '[' -z 78325 ']' 00:17:19.462 22:17:15 -- common/autotest_common.sh@940 -- # kill -0 78325 00:17:19.462 22:17:15 -- common/autotest_common.sh@941 -- # uname 00:17:19.462 22:17:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.462 22:17:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78325 00:17:19.462 killing process with pid 78325 00:17:19.462 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.462 00:17:19.462 Latency(us) 00:17:19.462 [2024-11-17T22:17:16.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.462 [2024-11-17T22:17:16.077Z] =================================================================================================================== 00:17:19.462 [2024-11-17T22:17:16.077Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.462 22:17:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:19.462 22:17:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:19.462 22:17:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78325' 00:17:19.462 22:17:15 -- common/autotest_common.sh@955 -- # kill 78325 00:17:19.462 22:17:15 -- common/autotest_common.sh@960 -- # wait 78325 00:17:19.722 22:17:16 -- target/tls.sh@37 -- # return 1 00:17:19.722 22:17:16 -- common/autotest_common.sh@653 -- # es=1 00:17:19.722 22:17:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:19.722 22:17:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:19.722 22:17:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:19.722 22:17:16 -- target/tls.sh@167 -- # killprocess 77677 00:17:19.722 22:17:16 -- common/autotest_common.sh@936 -- # '[' -z 77677 ']' 00:17:19.722 22:17:16 -- common/autotest_common.sh@940 -- # kill -0 77677 00:17:19.722 22:17:16 -- common/autotest_common.sh@941 -- # uname 00:17:19.722 22:17:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.722 22:17:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77677 00:17:19.722 killing process with pid 77677 00:17:19.722 22:17:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:19.722 22:17:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:19.722 22:17:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77677' 00:17:19.722 22:17:16 -- common/autotest_common.sh@955 -- # kill 77677 00:17:19.722 22:17:16 -- common/autotest_common.sh@960 -- # wait 77677 00:17:19.982 22:17:16 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:17:19.982 22:17:16 -- target/tls.sh@49 -- # local key hash crc 00:17:19.982 22:17:16 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:19.982 22:17:16 -- target/tls.sh@51 -- # hash=02 00:17:19.982 22:17:16 -- target/tls.sh@52 -- # gzip -1 -c 00:17:19.982 22:17:16 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:19.982 22:17:16 -- target/tls.sh@52 -- # head -c 4 00:17:19.982 22:17:16 -- target/tls.sh@52 -- # tail -c8 00:17:19.982 22:17:16 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:19.982 22:17:16 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:19.982 22:17:16 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:19.982 22:17:16 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:19.982 22:17:16 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:19.982 22:17:16 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.982 22:17:16 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:19.982 22:17:16 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.982 22:17:16 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:19.982 22:17:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:19.982 22:17:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:19.982 22:17:16 -- common/autotest_common.sh@10 -- # set +x 00:17:19.982 22:17:16 -- nvmf/common.sh@469 -- # nvmfpid=78390 00:17:19.982 22:17:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:19.982 22:17:16 -- nvmf/common.sh@470 -- # waitforlisten 78390 00:17:19.982 22:17:16 -- common/autotest_common.sh@829 -- # '[' -z 78390 ']' 00:17:19.982 22:17:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.982 22:17:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.982 22:17:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.982 22:17:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.982 22:17:16 -- common/autotest_common.sh@10 -- # set +x 00:17:19.982 [2024-11-17 22:17:16.482546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:19.982 [2024-11-17 22:17:16.482634] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.241 [2024-11-17 22:17:16.622977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.241 [2024-11-17 22:17:16.688711] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.241 [2024-11-17 22:17:16.688874] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:20.241 [2024-11-17 22:17:16.688887] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.241 [2024-11-17 22:17:16.688895] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.241 [2024-11-17 22:17:16.688925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.810 22:17:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.810 22:17:17 -- common/autotest_common.sh@862 -- # return 0 00:17:20.810 22:17:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:20.810 22:17:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:20.810 22:17:17 -- common/autotest_common.sh@10 -- # set +x 00:17:21.069 22:17:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.069 22:17:17 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:21.069 22:17:17 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:21.069 22:17:17 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:21.328 [2024-11-17 22:17:17.701677] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.328 22:17:17 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:21.328 22:17:17 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:21.587 [2024-11-17 22:17:18.113760] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:21.587 [2024-11-17 22:17:18.113994] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.587 22:17:18 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:21.846 malloc0 00:17:21.846 22:17:18 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:22.105 22:17:18 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:22.365 22:17:18 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:22.365 22:17:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:22.365 22:17:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:22.365 22:17:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:22.365 22:17:18 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:22.365 22:17:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:22.365 22:17:18 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:22.365 22:17:18 -- target/tls.sh@28 -- # bdevperf_pid=78494 00:17:22.365 22:17:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:22.365 22:17:18 -- target/tls.sh@31 -- # waitforlisten 78494 /var/tmp/bdevperf.sock 00:17:22.365 22:17:18 -- 
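Note on the format_interchange_psk step traced above: the shell pipeline computes the CRC32 of the 48-character configured key by pulling it out of a gzip -1 trailer (tail -c8 | head -c 4), appends those four bytes to the key, and base64-encodes the result into the NVMeTLSkey-1:02:...: interchange string written to key_long.txt. A minimal Python sketch of the same transform, assuming zlib.crc32 produces the same CRC-32 value that the gzip trailer carries (both use the standard CRC-32/IEEE polynomial):

    import base64
    import zlib

    def format_interchange_psk(configured_key: str, hash_id: str = "02") -> str:
        # Append the CRC32 of the key bytes (little-endian, the same value
        # found in the gzip trailer) and base64-encode key || crc.
        data = configured_key.encode()
        crc = zlib.crc32(data).to_bytes(4, "little")
        return f"NVMeTLSkey-1:{hash_id}:{base64.b64encode(data + crc).decode()}:"

    # Should match the key_long value captured in the trace above.
    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677"))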
common/autotest_common.sh@829 -- # '[' -z 78494 ']' 00:17:22.365 22:17:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.365 22:17:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.365 22:17:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:22.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.365 22:17:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.365 22:17:18 -- common/autotest_common.sh@10 -- # set +x 00:17:22.625 [2024-11-17 22:17:19.010351] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:22.625 [2024-11-17 22:17:19.010426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78494 ] 00:17:22.625 [2024-11-17 22:17:19.143022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.884 [2024-11-17 22:17:19.239721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.453 22:17:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.453 22:17:19 -- common/autotest_common.sh@862 -- # return 0 00:17:23.453 22:17:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:23.712 [2024-11-17 22:17:20.171266] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:23.712 TLSTESTn1 00:17:23.712 22:17:20 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:23.972 Running I/O for 10 seconds... 
00:17:33.952 00:17:33.952 Latency(us) 00:17:33.952 [2024-11-17T22:17:30.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.952 [2024-11-17T22:17:30.567Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:33.952 Verification LBA range: start 0x0 length 0x2000 00:17:33.952 TLSTESTn1 : 10.01 6698.52 26.17 0.00 0.00 19080.06 3813.00 17158.52 00:17:33.952 [2024-11-17T22:17:30.567Z] =================================================================================================================== 00:17:33.952 [2024-11-17T22:17:30.567Z] Total : 6698.52 26.17 0.00 0.00 19080.06 3813.00 17158.52 00:17:33.952 0 00:17:33.952 22:17:30 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:33.952 22:17:30 -- target/tls.sh@45 -- # killprocess 78494 00:17:33.952 22:17:30 -- common/autotest_common.sh@936 -- # '[' -z 78494 ']' 00:17:33.952 22:17:30 -- common/autotest_common.sh@940 -- # kill -0 78494 00:17:33.953 22:17:30 -- common/autotest_common.sh@941 -- # uname 00:17:33.953 22:17:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:33.953 22:17:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78494 00:17:33.953 killing process with pid 78494 00:17:33.953 Received shutdown signal, test time was about 10.000000 seconds 00:17:33.953 00:17:33.953 Latency(us) 00:17:33.953 [2024-11-17T22:17:30.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.953 [2024-11-17T22:17:30.568Z] =================================================================================================================== 00:17:33.953 [2024-11-17T22:17:30.568Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:33.953 22:17:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:33.953 22:17:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:33.953 22:17:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78494' 00:17:33.953 22:17:30 -- common/autotest_common.sh@955 -- # kill 78494 00:17:33.953 22:17:30 -- common/autotest_common.sh@960 -- # wait 78494 00:17:34.211 22:17:30 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:34.211 22:17:30 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:34.211 22:17:30 -- common/autotest_common.sh@650 -- # local es=0 00:17:34.211 22:17:30 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:34.211 22:17:30 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:34.211 22:17:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.211 22:17:30 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:34.211 22:17:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.211 22:17:30 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:34.211 22:17:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:34.211 22:17:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:34.211 22:17:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:34.211 22:17:30 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:34.211 22:17:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:34.211 22:17:30 -- target/tls.sh@28 -- # bdevperf_pid=78641 00:17:34.211 22:17:30 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:34.211 22:17:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:34.211 22:17:30 -- target/tls.sh@31 -- # waitforlisten 78641 /var/tmp/bdevperf.sock 00:17:34.211 22:17:30 -- common/autotest_common.sh@829 -- # '[' -z 78641 ']' 00:17:34.211 22:17:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:34.211 22:17:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.211 22:17:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:34.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:34.211 22:17:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.211 22:17:30 -- common/autotest_common.sh@10 -- # set +x 00:17:34.211 [2024-11-17 22:17:30.811548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:34.211 [2024-11-17 22:17:30.811656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78641 ] 00:17:34.469 [2024-11-17 22:17:30.943978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.469 [2024-11-17 22:17:31.033780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.403 22:17:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.403 22:17:31 -- common/autotest_common.sh@862 -- # return 0 00:17:35.403 22:17:31 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:35.403 [2024-11-17 22:17:31.978527] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:35.403 [2024-11-17 22:17:31.978586] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:35.403 2024/11/17 22:17:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:35.403 request: 00:17:35.403 { 00:17:35.403 "method": "bdev_nvme_attach_controller", 00:17:35.403 "params": { 00:17:35.403 "name": "TLSTEST", 00:17:35.403 "trtype": "tcp", 00:17:35.403 "traddr": "10.0.0.2", 00:17:35.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.403 "adrfam": "ipv4", 00:17:35.403 "trsvcid": "4420", 00:17:35.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.403 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:35.403 } 00:17:35.403 } 00:17:35.403 Got 
JSON-RPC error response 00:17:35.403 GoRPCClient: error on JSON-RPC call 00:17:35.403 22:17:31 -- target/tls.sh@36 -- # killprocess 78641 00:17:35.403 22:17:31 -- common/autotest_common.sh@936 -- # '[' -z 78641 ']' 00:17:35.403 22:17:31 -- common/autotest_common.sh@940 -- # kill -0 78641 00:17:35.403 22:17:31 -- common/autotest_common.sh@941 -- # uname 00:17:35.403 22:17:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:35.403 22:17:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78641 00:17:35.662 killing process with pid 78641 00:17:35.662 Received shutdown signal, test time was about 10.000000 seconds 00:17:35.662 00:17:35.662 Latency(us) 00:17:35.662 [2024-11-17T22:17:32.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.662 [2024-11-17T22:17:32.277Z] =================================================================================================================== 00:17:35.662 [2024-11-17T22:17:32.277Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:35.662 22:17:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:35.662 22:17:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:35.662 22:17:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78641' 00:17:35.662 22:17:32 -- common/autotest_common.sh@955 -- # kill 78641 00:17:35.662 22:17:32 -- common/autotest_common.sh@960 -- # wait 78641 00:17:35.921 22:17:32 -- target/tls.sh@37 -- # return 1 00:17:35.921 22:17:32 -- common/autotest_common.sh@653 -- # es=1 00:17:35.921 22:17:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:35.921 22:17:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:35.921 22:17:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:35.921 22:17:32 -- target/tls.sh@183 -- # killprocess 78390 00:17:35.921 22:17:32 -- common/autotest_common.sh@936 -- # '[' -z 78390 ']' 00:17:35.921 22:17:32 -- common/autotest_common.sh@940 -- # kill -0 78390 00:17:35.921 22:17:32 -- common/autotest_common.sh@941 -- # uname 00:17:35.921 22:17:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:35.921 22:17:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78390 00:17:35.921 killing process with pid 78390 00:17:35.921 22:17:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:35.921 22:17:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:35.921 22:17:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78390' 00:17:35.921 22:17:32 -- common/autotest_common.sh@955 -- # kill 78390 00:17:35.921 22:17:32 -- common/autotest_common.sh@960 -- # wait 78390 00:17:36.180 22:17:32 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:36.180 22:17:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:36.180 22:17:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.180 22:17:32 -- common/autotest_common.sh@10 -- # set +x 00:17:36.180 22:17:32 -- nvmf/common.sh@469 -- # nvmfpid=78692 00:17:36.180 22:17:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:36.180 22:17:32 -- nvmf/common.sh@470 -- # waitforlisten 78692 00:17:36.180 22:17:32 -- common/autotest_common.sh@829 -- # '[' -z 78692 ']' 00:17:36.180 22:17:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.180 22:17:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.180 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.180 22:17:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.180 22:17:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.180 22:17:32 -- common/autotest_common.sh@10 -- # set +x 00:17:36.180 [2024-11-17 22:17:32.665281] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:36.180 [2024-11-17 22:17:32.665382] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.439 [2024-11-17 22:17:32.798461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.439 [2024-11-17 22:17:32.886288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:36.439 [2024-11-17 22:17:32.886426] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.439 [2024-11-17 22:17:32.886439] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.439 [2024-11-17 22:17:32.886447] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.439 [2024-11-17 22:17:32.886476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.375 22:17:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.375 22:17:33 -- common/autotest_common.sh@862 -- # return 0 00:17:37.375 22:17:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:37.375 22:17:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.375 22:17:33 -- common/autotest_common.sh@10 -- # set +x 00:17:37.375 22:17:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.375 22:17:33 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:37.375 22:17:33 -- common/autotest_common.sh@650 -- # local es=0 00:17:37.375 22:17:33 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:37.375 22:17:33 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:37.375 22:17:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.375 22:17:33 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:37.375 22:17:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.375 22:17:33 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:37.375 22:17:33 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:37.375 22:17:33 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:37.375 [2024-11-17 22:17:33.968036] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.633 22:17:33 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:37.633 22:17:34 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:37.892 
[2024-11-17 22:17:34.416170] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:37.892 [2024-11-17 22:17:34.416358] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.892 22:17:34 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:38.150 malloc0 00:17:38.150 22:17:34 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:38.409 22:17:34 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.667 [2024-11-17 22:17:35.051221] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:38.667 [2024-11-17 22:17:35.051261] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:38.667 [2024-11-17 22:17:35.051278] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:38.667 2024/11/17 22:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:38.667 request: 00:17:38.667 { 00:17:38.667 "method": "nvmf_subsystem_add_host", 00:17:38.667 "params": { 00:17:38.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.667 "host": "nqn.2016-06.io.spdk:host1", 00:17:38.667 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:38.667 } 00:17:38.667 } 00:17:38.667 Got JSON-RPC error response 00:17:38.667 GoRPCClient: error on JSON-RPC call 00:17:38.667 22:17:35 -- common/autotest_common.sh@653 -- # es=1 00:17:38.667 22:17:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:38.667 22:17:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:38.667 22:17:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:38.667 22:17:35 -- target/tls.sh@189 -- # killprocess 78692 00:17:38.667 22:17:35 -- common/autotest_common.sh@936 -- # '[' -z 78692 ']' 00:17:38.667 22:17:35 -- common/autotest_common.sh@940 -- # kill -0 78692 00:17:38.667 22:17:35 -- common/autotest_common.sh@941 -- # uname 00:17:38.667 22:17:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.667 22:17:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78692 00:17:38.667 22:17:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:38.667 22:17:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:38.667 killing process with pid 78692 00:17:38.667 22:17:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78692' 00:17:38.667 22:17:35 -- common/autotest_common.sh@955 -- # kill 78692 00:17:38.667 22:17:35 -- common/autotest_common.sh@960 -- # wait 78692 00:17:38.926 22:17:35 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.926 22:17:35 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:38.926 22:17:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:38.926 22:17:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:38.926 22:17:35 -- common/autotest_common.sh@10 -- # set +x 00:17:38.926 22:17:35 -- nvmf/common.sh@469 -- # nvmfpid=78808 
00:17:38.926 22:17:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:38.926 22:17:35 -- nvmf/common.sh@470 -- # waitforlisten 78808 00:17:38.926 22:17:35 -- common/autotest_common.sh@829 -- # '[' -z 78808 ']' 00:17:38.926 22:17:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.926 22:17:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.926 22:17:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.926 22:17:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.926 22:17:35 -- common/autotest_common.sh@10 -- # set +x 00:17:38.926 [2024-11-17 22:17:35.407901] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:38.926 [2024-11-17 22:17:35.407999] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.185 [2024-11-17 22:17:35.546240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.185 [2024-11-17 22:17:35.623823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:39.185 [2024-11-17 22:17:35.623991] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.185 [2024-11-17 22:17:35.624002] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.185 [2024-11-17 22:17:35.624011] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:39.185 [2024-11-17 22:17:35.624043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.752 22:17:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.752 22:17:36 -- common/autotest_common.sh@862 -- # return 0 00:17:39.752 22:17:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:39.752 22:17:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.752 22:17:36 -- common/autotest_common.sh@10 -- # set +x 00:17:40.012 22:17:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.012 22:17:36 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:40.012 22:17:36 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:40.012 22:17:36 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:40.012 [2024-11-17 22:17:36.580369] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.012 22:17:36 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:40.271 22:17:36 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:40.529 [2024-11-17 22:17:37.060409] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.529 [2024-11-17 22:17:37.060618] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.529 22:17:37 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:40.788 malloc0 00:17:40.788 22:17:37 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:41.047 22:17:37 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.306 22:17:37 -- target/tls.sh@197 -- # bdevperf_pid=78906 00:17:41.306 22:17:37 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:41.306 22:17:37 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:41.306 22:17:37 -- target/tls.sh@200 -- # waitforlisten 78906 /var/tmp/bdevperf.sock 00:17:41.306 22:17:37 -- common/autotest_common.sh@829 -- # '[' -z 78906 ']' 00:17:41.306 22:17:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.306 22:17:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.306 22:17:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.306 22:17:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.306 22:17:37 -- common/autotest_common.sh@10 -- # set +x 00:17:41.306 [2024-11-17 22:17:37.864042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:41.306 [2024-11-17 22:17:37.864796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78906 ] 00:17:41.564 [2024-11-17 22:17:38.000721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.564 [2024-11-17 22:17:38.099920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.132 22:17:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.132 22:17:38 -- common/autotest_common.sh@862 -- # return 0 00:17:42.132 22:17:38 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.391 [2024-11-17 22:17:38.984564] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:42.650 TLSTESTn1 00:17:42.650 22:17:39 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:42.908 22:17:39 -- target/tls.sh@205 -- # tgtconf='{ 00:17:42.908 "subsystems": [ 00:17:42.908 { 00:17:42.908 "subsystem": "iobuf", 00:17:42.908 "config": [ 00:17:42.908 { 00:17:42.908 "method": "iobuf_set_options", 00:17:42.908 "params": { 00:17:42.908 "large_bufsize": 135168, 00:17:42.908 "large_pool_count": 1024, 00:17:42.908 "small_bufsize": 8192, 00:17:42.908 "small_pool_count": 8192 00:17:42.908 } 00:17:42.908 } 00:17:42.908 ] 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "subsystem": "sock", 00:17:42.908 "config": [ 00:17:42.908 { 00:17:42.908 "method": "sock_impl_set_options", 00:17:42.908 "params": { 00:17:42.908 "enable_ktls": false, 00:17:42.908 "enable_placement_id": 0, 00:17:42.908 "enable_quickack": false, 00:17:42.908 "enable_recv_pipe": true, 00:17:42.908 "enable_zerocopy_send_client": false, 00:17:42.908 "enable_zerocopy_send_server": true, 00:17:42.908 "impl_name": "posix", 00:17:42.908 "recv_buf_size": 2097152, 00:17:42.908 "send_buf_size": 2097152, 00:17:42.908 "tls_version": 0, 00:17:42.908 "zerocopy_threshold": 0 00:17:42.908 } 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "method": "sock_impl_set_options", 00:17:42.908 "params": { 00:17:42.908 "enable_ktls": false, 00:17:42.908 "enable_placement_id": 0, 00:17:42.908 "enable_quickack": false, 00:17:42.908 "enable_recv_pipe": true, 00:17:42.908 "enable_zerocopy_send_client": false, 00:17:42.908 "enable_zerocopy_send_server": true, 00:17:42.908 "impl_name": "ssl", 00:17:42.908 "recv_buf_size": 4096, 00:17:42.908 "send_buf_size": 4096, 00:17:42.908 "tls_version": 0, 00:17:42.908 "zerocopy_threshold": 0 00:17:42.908 } 00:17:42.908 } 00:17:42.908 ] 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "subsystem": "vmd", 00:17:42.908 "config": [] 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "subsystem": "accel", 00:17:42.908 "config": [ 00:17:42.908 { 00:17:42.908 "method": "accel_set_options", 00:17:42.908 "params": { 00:17:42.908 "buf_count": 2048, 00:17:42.908 "large_cache_size": 16, 00:17:42.908 "sequence_count": 2048, 00:17:42.908 "small_cache_size": 128, 00:17:42.908 "task_count": 2048 00:17:42.908 } 00:17:42.908 } 00:17:42.908 ] 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "subsystem": "bdev", 00:17:42.908 "config": [ 00:17:42.908 { 00:17:42.908 "method": "bdev_set_options", 00:17:42.908 "params": { 00:17:42.908 
"bdev_auto_examine": true, 00:17:42.908 "bdev_io_cache_size": 256, 00:17:42.908 "bdev_io_pool_size": 65535, 00:17:42.908 "iobuf_large_cache_size": 16, 00:17:42.908 "iobuf_small_cache_size": 128 00:17:42.908 } 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "method": "bdev_raid_set_options", 00:17:42.908 "params": { 00:17:42.908 "process_window_size_kb": 1024 00:17:42.908 } 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "method": "bdev_iscsi_set_options", 00:17:42.908 "params": { 00:17:42.908 "timeout_sec": 30 00:17:42.908 } 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "method": "bdev_nvme_set_options", 00:17:42.908 "params": { 00:17:42.908 "action_on_timeout": "none", 00:17:42.908 "allow_accel_sequence": false, 00:17:42.908 "arbitration_burst": 0, 00:17:42.908 "bdev_retry_count": 3, 00:17:42.908 "ctrlr_loss_timeout_sec": 0, 00:17:42.908 "delay_cmd_submit": true, 00:17:42.908 "fast_io_fail_timeout_sec": 0, 00:17:42.908 "generate_uuids": false, 00:17:42.908 "high_priority_weight": 0, 00:17:42.908 "io_path_stat": false, 00:17:42.908 "io_queue_requests": 0, 00:17:42.908 "keep_alive_timeout_ms": 10000, 00:17:42.908 "low_priority_weight": 0, 00:17:42.908 "medium_priority_weight": 0, 00:17:42.908 "nvme_adminq_poll_period_us": 10000, 00:17:42.908 "nvme_ioq_poll_period_us": 0, 00:17:42.908 "reconnect_delay_sec": 0, 00:17:42.908 "timeout_admin_us": 0, 00:17:42.908 "timeout_us": 0, 00:17:42.908 "transport_ack_timeout": 0, 00:17:42.908 "transport_retry_count": 4, 00:17:42.908 "transport_tos": 0 00:17:42.908 } 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "method": "bdev_nvme_set_hotplug", 00:17:42.908 "params": { 00:17:42.908 "enable": false, 00:17:42.908 "period_us": 100000 00:17:42.908 } 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "method": "bdev_malloc_create", 00:17:42.908 "params": { 00:17:42.908 "block_size": 4096, 00:17:42.908 "name": "malloc0", 00:17:42.908 "num_blocks": 8192, 00:17:42.908 "optimal_io_boundary": 0, 00:17:42.908 "physical_block_size": 4096, 00:17:42.908 "uuid": "213d9c5e-86d8-4cd0-b4d6-e3b178e5ddd9" 00:17:42.908 } 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "method": "bdev_wait_for_examine" 00:17:42.908 } 00:17:42.908 ] 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "subsystem": "nbd", 00:17:42.908 "config": [] 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "subsystem": "scheduler", 00:17:42.908 "config": [ 00:17:42.908 { 00:17:42.908 "method": "framework_set_scheduler", 00:17:42.908 "params": { 00:17:42.908 "name": "static" 00:17:42.908 } 00:17:42.908 } 00:17:42.908 ] 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "subsystem": "nvmf", 00:17:42.908 "config": [ 00:17:42.908 { 00:17:42.908 "method": "nvmf_set_config", 00:17:42.908 "params": { 00:17:42.908 "admin_cmd_passthru": { 00:17:42.908 "identify_ctrlr": false 00:17:42.908 }, 00:17:42.908 "discovery_filter": "match_any" 00:17:42.908 } 00:17:42.908 }, 00:17:42.908 { 00:17:42.908 "method": "nvmf_set_max_subsystems", 00:17:42.908 "params": { 00:17:42.909 "max_subsystems": 1024 00:17:42.909 } 00:17:42.909 }, 00:17:42.909 { 00:17:42.909 "method": "nvmf_set_crdt", 00:17:42.909 "params": { 00:17:42.909 "crdt1": 0, 00:17:42.909 "crdt2": 0, 00:17:42.909 "crdt3": 0 00:17:42.909 } 00:17:42.909 }, 00:17:42.909 { 00:17:42.909 "method": "nvmf_create_transport", 00:17:42.909 "params": { 00:17:42.909 "abort_timeout_sec": 1, 00:17:42.909 "buf_cache_size": 4294967295, 00:17:42.909 "c2h_success": false, 00:17:42.909 "dif_insert_or_strip": false, 00:17:42.909 "in_capsule_data_size": 4096, 00:17:42.909 "io_unit_size": 131072, 00:17:42.909 "max_aq_depth": 128, 
00:17:42.909 "max_io_qpairs_per_ctrlr": 127, 00:17:42.909 "max_io_size": 131072, 00:17:42.909 "max_queue_depth": 128, 00:17:42.909 "num_shared_buffers": 511, 00:17:42.909 "sock_priority": 0, 00:17:42.909 "trtype": "TCP", 00:17:42.909 "zcopy": false 00:17:42.909 } 00:17:42.909 }, 00:17:42.909 { 00:17:42.909 "method": "nvmf_create_subsystem", 00:17:42.909 "params": { 00:17:42.909 "allow_any_host": false, 00:17:42.909 "ana_reporting": false, 00:17:42.909 "max_cntlid": 65519, 00:17:42.909 "max_namespaces": 10, 00:17:42.909 "min_cntlid": 1, 00:17:42.909 "model_number": "SPDK bdev Controller", 00:17:42.909 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.909 "serial_number": "SPDK00000000000001" 00:17:42.909 } 00:17:42.909 }, 00:17:42.909 { 00:17:42.909 "method": "nvmf_subsystem_add_host", 00:17:42.909 "params": { 00:17:42.909 "host": "nqn.2016-06.io.spdk:host1", 00:17:42.909 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.909 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:42.909 } 00:17:42.909 }, 00:17:42.909 { 00:17:42.909 "method": "nvmf_subsystem_add_ns", 00:17:42.909 "params": { 00:17:42.909 "namespace": { 00:17:42.909 "bdev_name": "malloc0", 00:17:42.909 "nguid": "213D9C5E86D84CD0B4D6E3B178E5DDD9", 00:17:42.909 "nsid": 1, 00:17:42.909 "uuid": "213d9c5e-86d8-4cd0-b4d6-e3b178e5ddd9" 00:17:42.909 }, 00:17:42.909 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:42.909 } 00:17:42.909 }, 00:17:42.909 { 00:17:42.909 "method": "nvmf_subsystem_add_listener", 00:17:42.909 "params": { 00:17:42.909 "listen_address": { 00:17:42.909 "adrfam": "IPv4", 00:17:42.909 "traddr": "10.0.0.2", 00:17:42.909 "trsvcid": "4420", 00:17:42.909 "trtype": "TCP" 00:17:42.909 }, 00:17:42.909 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.909 "secure_channel": true 00:17:42.909 } 00:17:42.909 } 00:17:42.909 ] 00:17:42.909 } 00:17:42.909 ] 00:17:42.909 }' 00:17:42.909 22:17:39 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:43.168 22:17:39 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:43.168 "subsystems": [ 00:17:43.168 { 00:17:43.168 "subsystem": "iobuf", 00:17:43.168 "config": [ 00:17:43.168 { 00:17:43.168 "method": "iobuf_set_options", 00:17:43.168 "params": { 00:17:43.168 "large_bufsize": 135168, 00:17:43.168 "large_pool_count": 1024, 00:17:43.168 "small_bufsize": 8192, 00:17:43.168 "small_pool_count": 8192 00:17:43.168 } 00:17:43.168 } 00:17:43.168 ] 00:17:43.168 }, 00:17:43.168 { 00:17:43.168 "subsystem": "sock", 00:17:43.168 "config": [ 00:17:43.168 { 00:17:43.168 "method": "sock_impl_set_options", 00:17:43.168 "params": { 00:17:43.168 "enable_ktls": false, 00:17:43.168 "enable_placement_id": 0, 00:17:43.168 "enable_quickack": false, 00:17:43.168 "enable_recv_pipe": true, 00:17:43.168 "enable_zerocopy_send_client": false, 00:17:43.168 "enable_zerocopy_send_server": true, 00:17:43.168 "impl_name": "posix", 00:17:43.168 "recv_buf_size": 2097152, 00:17:43.168 "send_buf_size": 2097152, 00:17:43.168 "tls_version": 0, 00:17:43.168 "zerocopy_threshold": 0 00:17:43.168 } 00:17:43.168 }, 00:17:43.168 { 00:17:43.168 "method": "sock_impl_set_options", 00:17:43.168 "params": { 00:17:43.168 "enable_ktls": false, 00:17:43.168 "enable_placement_id": 0, 00:17:43.168 "enable_quickack": false, 00:17:43.168 "enable_recv_pipe": true, 00:17:43.168 "enable_zerocopy_send_client": false, 00:17:43.168 "enable_zerocopy_send_server": true, 00:17:43.168 "impl_name": "ssl", 00:17:43.168 "recv_buf_size": 4096, 00:17:43.168 "send_buf_size": 4096, 00:17:43.168 
"tls_version": 0, 00:17:43.168 "zerocopy_threshold": 0 00:17:43.168 } 00:17:43.168 } 00:17:43.168 ] 00:17:43.168 }, 00:17:43.168 { 00:17:43.168 "subsystem": "vmd", 00:17:43.168 "config": [] 00:17:43.168 }, 00:17:43.168 { 00:17:43.168 "subsystem": "accel", 00:17:43.168 "config": [ 00:17:43.168 { 00:17:43.168 "method": "accel_set_options", 00:17:43.168 "params": { 00:17:43.168 "buf_count": 2048, 00:17:43.168 "large_cache_size": 16, 00:17:43.168 "sequence_count": 2048, 00:17:43.168 "small_cache_size": 128, 00:17:43.168 "task_count": 2048 00:17:43.168 } 00:17:43.168 } 00:17:43.168 ] 00:17:43.168 }, 00:17:43.168 { 00:17:43.168 "subsystem": "bdev", 00:17:43.168 "config": [ 00:17:43.168 { 00:17:43.168 "method": "bdev_set_options", 00:17:43.168 "params": { 00:17:43.168 "bdev_auto_examine": true, 00:17:43.168 "bdev_io_cache_size": 256, 00:17:43.168 "bdev_io_pool_size": 65535, 00:17:43.168 "iobuf_large_cache_size": 16, 00:17:43.168 "iobuf_small_cache_size": 128 00:17:43.168 } 00:17:43.168 }, 00:17:43.168 { 00:17:43.168 "method": "bdev_raid_set_options", 00:17:43.168 "params": { 00:17:43.168 "process_window_size_kb": 1024 00:17:43.168 } 00:17:43.168 }, 00:17:43.168 { 00:17:43.168 "method": "bdev_iscsi_set_options", 00:17:43.168 "params": { 00:17:43.168 "timeout_sec": 30 00:17:43.168 } 00:17:43.168 }, 00:17:43.168 { 00:17:43.168 "method": "bdev_nvme_set_options", 00:17:43.168 "params": { 00:17:43.168 "action_on_timeout": "none", 00:17:43.168 "allow_accel_sequence": false, 00:17:43.168 "arbitration_burst": 0, 00:17:43.168 "bdev_retry_count": 3, 00:17:43.168 "ctrlr_loss_timeout_sec": 0, 00:17:43.168 "delay_cmd_submit": true, 00:17:43.168 "fast_io_fail_timeout_sec": 0, 00:17:43.168 "generate_uuids": false, 00:17:43.168 "high_priority_weight": 0, 00:17:43.168 "io_path_stat": false, 00:17:43.168 "io_queue_requests": 512, 00:17:43.168 "keep_alive_timeout_ms": 10000, 00:17:43.168 "low_priority_weight": 0, 00:17:43.168 "medium_priority_weight": 0, 00:17:43.168 "nvme_adminq_poll_period_us": 10000, 00:17:43.168 "nvme_ioq_poll_period_us": 0, 00:17:43.168 "reconnect_delay_sec": 0, 00:17:43.168 "timeout_admin_us": 0, 00:17:43.168 "timeout_us": 0, 00:17:43.168 "transport_ack_timeout": 0, 00:17:43.168 "transport_retry_count": 4, 00:17:43.168 "transport_tos": 0 00:17:43.168 } 00:17:43.168 }, 00:17:43.168 { 00:17:43.168 "method": "bdev_nvme_attach_controller", 00:17:43.168 "params": { 00:17:43.168 "adrfam": "IPv4", 00:17:43.168 "ctrlr_loss_timeout_sec": 0, 00:17:43.168 "ddgst": false, 00:17:43.168 "fast_io_fail_timeout_sec": 0, 00:17:43.168 "hdgst": false, 00:17:43.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.168 "name": "TLSTEST", 00:17:43.168 "prchk_guard": false, 00:17:43.168 "prchk_reftag": false, 00:17:43.168 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:43.168 "reconnect_delay_sec": 0, 00:17:43.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.168 "traddr": "10.0.0.2", 00:17:43.168 "trsvcid": "4420", 00:17:43.168 "trtype": "TCP" 00:17:43.168 } 00:17:43.169 }, 00:17:43.169 { 00:17:43.169 "method": "bdev_nvme_set_hotplug", 00:17:43.169 "params": { 00:17:43.169 "enable": false, 00:17:43.169 "period_us": 100000 00:17:43.169 } 00:17:43.169 }, 00:17:43.169 { 00:17:43.169 "method": "bdev_wait_for_examine" 00:17:43.169 } 00:17:43.169 ] 00:17:43.169 }, 00:17:43.169 { 00:17:43.169 "subsystem": "nbd", 00:17:43.169 "config": [] 00:17:43.169 } 00:17:43.169 ] 00:17:43.169 }' 00:17:43.169 22:17:39 -- target/tls.sh@208 -- # killprocess 78906 00:17:43.169 22:17:39 -- 
common/autotest_common.sh@936 -- # '[' -z 78906 ']' 00:17:43.169 22:17:39 -- common/autotest_common.sh@940 -- # kill -0 78906 00:17:43.169 22:17:39 -- common/autotest_common.sh@941 -- # uname 00:17:43.169 22:17:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.169 22:17:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78906 00:17:43.169 22:17:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:43.169 killing process with pid 78906 00:17:43.169 22:17:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:43.169 22:17:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78906' 00:17:43.169 Received shutdown signal, test time was about 10.000000 seconds 00:17:43.169 00:17:43.169 Latency(us) 00:17:43.169 [2024-11-17T22:17:39.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.169 [2024-11-17T22:17:39.784Z] =================================================================================================================== 00:17:43.169 [2024-11-17T22:17:39.784Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:43.169 22:17:39 -- common/autotest_common.sh@955 -- # kill 78906 00:17:43.169 22:17:39 -- common/autotest_common.sh@960 -- # wait 78906 00:17:43.428 22:17:39 -- target/tls.sh@209 -- # killprocess 78808 00:17:43.428 22:17:39 -- common/autotest_common.sh@936 -- # '[' -z 78808 ']' 00:17:43.428 22:17:39 -- common/autotest_common.sh@940 -- # kill -0 78808 00:17:43.428 22:17:39 -- common/autotest_common.sh@941 -- # uname 00:17:43.428 22:17:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.428 22:17:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78808 00:17:43.428 22:17:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:43.428 killing process with pid 78808 00:17:43.428 22:17:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:43.428 22:17:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78808' 00:17:43.428 22:17:39 -- common/autotest_common.sh@955 -- # kill 78808 00:17:43.428 22:17:39 -- common/autotest_common.sh@960 -- # wait 78808 00:17:43.687 22:17:40 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:43.687 22:17:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:43.687 22:17:40 -- target/tls.sh@212 -- # echo '{ 00:17:43.687 "subsystems": [ 00:17:43.687 { 00:17:43.687 "subsystem": "iobuf", 00:17:43.687 "config": [ 00:17:43.687 { 00:17:43.687 "method": "iobuf_set_options", 00:17:43.687 "params": { 00:17:43.687 "large_bufsize": 135168, 00:17:43.687 "large_pool_count": 1024, 00:17:43.687 "small_bufsize": 8192, 00:17:43.687 "small_pool_count": 8192 00:17:43.687 } 00:17:43.687 } 00:17:43.687 ] 00:17:43.687 }, 00:17:43.687 { 00:17:43.687 "subsystem": "sock", 00:17:43.687 "config": [ 00:17:43.687 { 00:17:43.687 "method": "sock_impl_set_options", 00:17:43.687 "params": { 00:17:43.687 "enable_ktls": false, 00:17:43.687 "enable_placement_id": 0, 00:17:43.687 "enable_quickack": false, 00:17:43.687 "enable_recv_pipe": true, 00:17:43.687 "enable_zerocopy_send_client": false, 00:17:43.687 "enable_zerocopy_send_server": true, 00:17:43.687 "impl_name": "posix", 00:17:43.687 "recv_buf_size": 2097152, 00:17:43.687 "send_buf_size": 2097152, 00:17:43.687 "tls_version": 0, 00:17:43.687 "zerocopy_threshold": 0 00:17:43.687 } 00:17:43.687 }, 00:17:43.687 { 00:17:43.687 "method": "sock_impl_set_options", 00:17:43.687 "params": { 00:17:43.687 
"enable_ktls": false, 00:17:43.687 "enable_placement_id": 0, 00:17:43.687 "enable_quickack": false, 00:17:43.687 "enable_recv_pipe": true, 00:17:43.687 "enable_zerocopy_send_client": false, 00:17:43.687 "enable_zerocopy_send_server": true, 00:17:43.687 "impl_name": "ssl", 00:17:43.687 "recv_buf_size": 4096, 00:17:43.687 "send_buf_size": 4096, 00:17:43.687 "tls_version": 0, 00:17:43.687 "zerocopy_threshold": 0 00:17:43.687 } 00:17:43.687 } 00:17:43.687 ] 00:17:43.687 }, 00:17:43.687 { 00:17:43.687 "subsystem": "vmd", 00:17:43.687 "config": [] 00:17:43.687 }, 00:17:43.687 { 00:17:43.687 "subsystem": "accel", 00:17:43.687 "config": [ 00:17:43.687 { 00:17:43.687 "method": "accel_set_options", 00:17:43.687 "params": { 00:17:43.687 "buf_count": 2048, 00:17:43.687 "large_cache_size": 16, 00:17:43.687 "sequence_count": 2048, 00:17:43.687 "small_cache_size": 128, 00:17:43.687 "task_count": 2048 00:17:43.687 } 00:17:43.687 } 00:17:43.687 ] 00:17:43.687 }, 00:17:43.687 { 00:17:43.687 "subsystem": "bdev", 00:17:43.687 "config": [ 00:17:43.687 { 00:17:43.687 "method": "bdev_set_options", 00:17:43.687 "params": { 00:17:43.687 "bdev_auto_examine": true, 00:17:43.687 "bdev_io_cache_size": 256, 00:17:43.687 "bdev_io_pool_size": 65535, 00:17:43.687 "iobuf_large_cache_size": 16, 00:17:43.687 "iobuf_small_cache_size": 128 00:17:43.687 } 00:17:43.687 }, 00:17:43.687 { 00:17:43.687 "method": "bdev_raid_set_options", 00:17:43.687 "params": { 00:17:43.687 "process_window_size_kb": 1024 00:17:43.687 } 00:17:43.687 }, 00:17:43.687 { 00:17:43.687 "method": "bdev_iscsi_set_options", 00:17:43.687 "params": { 00:17:43.687 "timeout_sec": 30 00:17:43.687 } 00:17:43.687 }, 00:17:43.687 { 00:17:43.687 "method": "bdev_nvme_set_options", 00:17:43.687 "params": { 00:17:43.687 "action_on_timeout": "none", 00:17:43.687 "allow_accel_sequence": false, 00:17:43.687 "arbitration_burst": 0, 00:17:43.687 "bdev_retry_count": 3, 00:17:43.687 "ctrlr_loss_timeout_sec": 0, 00:17:43.687 "delay_cmd_submit": true, 00:17:43.687 "fast_io_fail_timeout_sec": 0, 00:17:43.687 "generate_uuids": false, 00:17:43.687 "high_priority_weight": 0, 00:17:43.687 "io_path_stat": false, 00:17:43.688 "io_queue_requests": 0, 00:17:43.688 "keep_alive_timeout_ms": 10000, 00:17:43.688 "low_priority_weight": 0, 00:17:43.688 "medium_priority_weight": 0, 00:17:43.688 "nvme_adminq_poll_period_us": 10000, 00:17:43.688 "nvme_ioq_poll_period_us": 0, 00:17:43.688 "reconnect_delay_sec": 0, 00:17:43.688 "timeout_admin_us": 0, 00:17:43.688 "timeout_us": 0, 00:17:43.688 "transport_ack_timeout": 0, 00:17:43.688 "transport_retry_count": 4, 00:17:43.688 "transport_tos": 0 00:17:43.688 } 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "method": "bdev_nvme_set_hotplug", 00:17:43.688 "params": { 00:17:43.688 "enable": false, 00:17:43.688 "period_us": 100000 00:17:43.688 } 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "method": "bdev_malloc_create", 00:17:43.688 "params": { 00:17:43.688 "block_size": 4096, 00:17:43.688 "name": "malloc0", 00:17:43.688 "num_blocks": 8192, 00:17:43.688 "optimal_io_boundary": 0, 00:17:43.688 "physical_block_size": 4096, 00:17:43.688 "uuid": "213d9c5e-86d8-4cd0-b4d6-e3b178e5ddd9" 00:17:43.688 } 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "method": "bdev_wait_for_examine" 00:17:43.688 } 00:17:43.688 ] 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "subsystem": "nbd", 00:17:43.688 "config": [] 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "subsystem": "scheduler", 00:17:43.688 "config": [ 00:17:43.688 { 00:17:43.688 "method": "framework_set_scheduler", 00:17:43.688 
"params": { 00:17:43.688 "name": "static" 00:17:43.688 } 00:17:43.688 } 00:17:43.688 ] 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "subsystem": "nvmf", 00:17:43.688 "config": [ 00:17:43.688 { 00:17:43.688 "method": "nvmf_set_config", 00:17:43.688 "params": { 00:17:43.688 "admin_cmd_passthru": { 00:17:43.688 "identify_ctrlr": false 00:17:43.688 }, 00:17:43.688 "discovery_filter": "match_any" 00:17:43.688 } 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "method": "nvmf_set_max_subsystems", 00:17:43.688 "params": { 00:17:43.688 "max_subsystems": 1024 00:17:43.688 } 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "method": "nvmf_set_crdt", 00:17:43.688 "params": { 00:17:43.688 "crdt1": 0, 00:17:43.688 "crdt2": 0, 00:17:43.688 "crdt3": 0 00:17:43.688 } 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "method": "nvmf_create_transport", 00:17:43.688 "params": { 00:17:43.688 "abort_timeout_sec": 1, 00:17:43.688 "buf_cache_size": 4294967295, 00:17:43.688 "c2h_success": false, 00:17:43.688 "dif_insert_or_strip": false, 00:17:43.688 "in_capsule_data_size": 4096, 00:17:43.688 "io_unit_size": 131072, 00:17:43.688 "max_aq_depth": 128, 00:17:43.688 "max_io_qpairs_per_ctrlr": 127, 00:17:43.688 "max_io_size": 131072, 00:17:43.688 "max_queue_depth": 128, 00:17:43.688 "num_shared_buffers": 511, 00:17:43.688 "sock_priority": 0, 00:17:43.688 "trtype": "TCP", 00:17:43.688 "zcopy": false 00:17:43.688 } 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "method": "nvmf_create_subsystem", 00:17:43.688 "params": { 00:17:43.688 "allow_any_host": false, 00:17:43.688 "ana_reporting": false, 00:17:43.688 "max_cntlid": 65519, 00:17:43.688 "max_namespaces": 10, 00:17:43.688 "min_cntlid": 1, 00:17:43.688 "model_number": "SPDK bdev Controller", 00:17:43.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.688 "serial_number": "SPDK00000000000001" 00:17:43.688 } 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "method": "nvmf_subsystem_add_host", 00:17:43.688 "params": { 00:17:43.688 "host": "nqn.2016-06.io.spdk:host1", 00:17:43.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.688 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:43.688 } 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "method": "nvmf_subsystem_add_ns", 00:17:43.688 "params": { 00:17:43.688 "namespace": { 00:17:43.688 "bdev_name": "malloc0", 00:17:43.688 "nguid": "213D9C5E86D84CD0B4D6E3B178E5DDD9", 00:17:43.688 "nsid": 1, 00:17:43.688 "uuid": "213d9c5e-86d8-4cd0-b4d6-e3b178e5ddd9" 00:17:43.688 }, 00:17:43.688 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:43.688 } 00:17:43.688 }, 00:17:43.688 { 00:17:43.688 "method": "nvmf_subsystem_add_listener", 00:17:43.688 "params": { 00:17:43.688 "listen_address": { 00:17:43.688 "adrfam": "IPv4", 00:17:43.688 "traddr": "10.0.0.2", 00:17:43.688 "trsvcid": "4420", 00:17:43.688 "trtype": "TCP" 00:17:43.688 }, 00:17:43.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.688 "secure_channel": true 00:17:43.688 } 00:17:43.688 } 00:17:43.688 ] 00:17:43.688 } 00:17:43.688 ] 00:17:43.688 }' 00:17:43.688 22:17:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:43.688 22:17:40 -- common/autotest_common.sh@10 -- # set +x 00:17:43.688 22:17:40 -- nvmf/common.sh@469 -- # nvmfpid=78979 00:17:43.688 22:17:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:43.688 22:17:40 -- nvmf/common.sh@470 -- # waitforlisten 78979 00:17:43.688 22:17:40 -- common/autotest_common.sh@829 -- # '[' -z 78979 ']' 00:17:43.688 22:17:40 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.688 22:17:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.688 22:17:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.688 22:17:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.688 22:17:40 -- common/autotest_common.sh@10 -- # set +x 00:17:43.688 [2024-11-17 22:17:40.290741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:43.688 [2024-11-17 22:17:40.290841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.947 [2024-11-17 22:17:40.423555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.947 [2024-11-17 22:17:40.504906] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:43.947 [2024-11-17 22:17:40.505049] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.947 [2024-11-17 22:17:40.505060] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.947 [2024-11-17 22:17:40.505068] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.947 [2024-11-17 22:17:40.505100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.206 [2024-11-17 22:17:40.753382] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.206 [2024-11-17 22:17:40.785343] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:44.206 [2024-11-17 22:17:40.785566] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.774 22:17:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.774 22:17:41 -- common/autotest_common.sh@862 -- # return 0 00:17:44.774 22:17:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:44.774 22:17:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:44.774 22:17:41 -- common/autotest_common.sh@10 -- # set +x 00:17:44.774 22:17:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.774 22:17:41 -- target/tls.sh@216 -- # bdevperf_pid=79023 00:17:44.774 22:17:41 -- target/tls.sh@217 -- # waitforlisten 79023 /var/tmp/bdevperf.sock 00:17:44.774 22:17:41 -- common/autotest_common.sh@829 -- # '[' -z 79023 ']' 00:17:44.774 22:17:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.774 22:17:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.774 22:17:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
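
A minimal sketch of the launch pattern used above for the TLS-enabled target: the full JSON configuration is echoed by tls.sh@212 and, presumably via a process substitution, handed to nvmf_tgt as a /dev/fd/NN path (here /dev/fd/62) rather than a file on disk. The config variable below is illustrative shorthand, not the literal script text.

config='{"subsystems":[ ... ]}'   # abridged; the complete JSON is echoed above
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
    -c <(echo "$config")          # bash expands <(...) to /dev/fd/62
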
00:17:44.774 22:17:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.774 22:17:41 -- common/autotest_common.sh@10 -- # set +x 00:17:44.774 22:17:41 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:44.774 22:17:41 -- target/tls.sh@213 -- # echo '{ 00:17:44.774 "subsystems": [ 00:17:44.774 { 00:17:44.774 "subsystem": "iobuf", 00:17:44.774 "config": [ 00:17:44.774 { 00:17:44.774 "method": "iobuf_set_options", 00:17:44.774 "params": { 00:17:44.774 "large_bufsize": 135168, 00:17:44.774 "large_pool_count": 1024, 00:17:44.774 "small_bufsize": 8192, 00:17:44.774 "small_pool_count": 8192 00:17:44.774 } 00:17:44.774 } 00:17:44.774 ] 00:17:44.774 }, 00:17:44.774 { 00:17:44.774 "subsystem": "sock", 00:17:44.774 "config": [ 00:17:44.774 { 00:17:44.774 "method": "sock_impl_set_options", 00:17:44.774 "params": { 00:17:44.774 "enable_ktls": false, 00:17:44.774 "enable_placement_id": 0, 00:17:44.774 "enable_quickack": false, 00:17:44.774 "enable_recv_pipe": true, 00:17:44.774 "enable_zerocopy_send_client": false, 00:17:44.774 "enable_zerocopy_send_server": true, 00:17:44.774 "impl_name": "posix", 00:17:44.774 "recv_buf_size": 2097152, 00:17:44.774 "send_buf_size": 2097152, 00:17:44.774 "tls_version": 0, 00:17:44.774 "zerocopy_threshold": 0 00:17:44.774 } 00:17:44.774 }, 00:17:44.774 { 00:17:44.774 "method": "sock_impl_set_options", 00:17:44.774 "params": { 00:17:44.774 "enable_ktls": false, 00:17:44.774 "enable_placement_id": 0, 00:17:44.774 "enable_quickack": false, 00:17:44.774 "enable_recv_pipe": true, 00:17:44.774 "enable_zerocopy_send_client": false, 00:17:44.774 "enable_zerocopy_send_server": true, 00:17:44.774 "impl_name": "ssl", 00:17:44.774 "recv_buf_size": 4096, 00:17:44.774 "send_buf_size": 4096, 00:17:44.774 "tls_version": 0, 00:17:44.774 "zerocopy_threshold": 0 00:17:44.774 } 00:17:44.774 } 00:17:44.774 ] 00:17:44.774 }, 00:17:44.774 { 00:17:44.774 "subsystem": "vmd", 00:17:44.774 "config": [] 00:17:44.774 }, 00:17:44.774 { 00:17:44.774 "subsystem": "accel", 00:17:44.774 "config": [ 00:17:44.774 { 00:17:44.774 "method": "accel_set_options", 00:17:44.774 "params": { 00:17:44.774 "buf_count": 2048, 00:17:44.774 "large_cache_size": 16, 00:17:44.774 "sequence_count": 2048, 00:17:44.774 "small_cache_size": 128, 00:17:44.774 "task_count": 2048 00:17:44.774 } 00:17:44.774 } 00:17:44.774 ] 00:17:44.774 }, 00:17:44.774 { 00:17:44.774 "subsystem": "bdev", 00:17:44.774 "config": [ 00:17:44.774 { 00:17:44.774 "method": "bdev_set_options", 00:17:44.774 "params": { 00:17:44.774 "bdev_auto_examine": true, 00:17:44.774 "bdev_io_cache_size": 256, 00:17:44.774 "bdev_io_pool_size": 65535, 00:17:44.774 "iobuf_large_cache_size": 16, 00:17:44.775 "iobuf_small_cache_size": 128 00:17:44.775 } 00:17:44.775 }, 00:17:44.775 { 00:17:44.775 "method": "bdev_raid_set_options", 00:17:44.775 "params": { 00:17:44.775 "process_window_size_kb": 1024 00:17:44.775 } 00:17:44.775 }, 00:17:44.775 { 00:17:44.775 "method": "bdev_iscsi_set_options", 00:17:44.775 "params": { 00:17:44.775 "timeout_sec": 30 00:17:44.775 } 00:17:44.775 }, 00:17:44.775 { 00:17:44.775 "method": "bdev_nvme_set_options", 00:17:44.775 "params": { 00:17:44.775 "action_on_timeout": "none", 00:17:44.775 "allow_accel_sequence": false, 00:17:44.775 "arbitration_burst": 0, 00:17:44.775 "bdev_retry_count": 3, 00:17:44.775 "ctrlr_loss_timeout_sec": 0, 00:17:44.775 "delay_cmd_submit": true, 00:17:44.775 "fast_io_fail_timeout_sec": 0, 
00:17:44.775 "generate_uuids": false, 00:17:44.775 "high_priority_weight": 0, 00:17:44.775 "io_path_stat": false, 00:17:44.775 "io_queue_requests": 512, 00:17:44.775 "keep_alive_timeout_ms": 10000, 00:17:44.775 "low_priority_weight": 0, 00:17:44.775 "medium_priority_weight": 0, 00:17:44.775 "nvme_adminq_poll_period_us": 10000, 00:17:44.775 "nvme_ioq_poll_period_us": 0, 00:17:44.775 "reconnect_delay_sec": 0, 00:17:44.775 "timeout_admin_us": 0, 00:17:44.775 "timeout_us": 0, 00:17:44.775 "transport_ack_timeout": 0, 00:17:44.775 "transport_retry_count": 4, 00:17:44.775 "transport_tos": 0 00:17:44.775 } 00:17:44.775 }, 00:17:44.775 { 00:17:44.775 "method": "bdev_nvme_attach_controller", 00:17:44.775 "params": { 00:17:44.775 "adrfam": "IPv4", 00:17:44.775 "ctrlr_loss_timeout_sec": 0, 00:17:44.775 "ddgst": false, 00:17:44.775 "fast_io_fail_timeout_sec": 0, 00:17:44.775 "hdgst": false, 00:17:44.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:44.775 "name": "TLSTEST", 00:17:44.775 "prchk_guard": false, 00:17:44.775 "prchk_reftag": false, 00:17:44.775 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:44.775 "reconnect_delay_sec": 0, 00:17:44.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.775 "traddr": "10.0.0.2", 00:17:44.775 "trsvcid": "4420", 00:17:44.775 "trtype": "TCP" 00:17:44.775 } 00:17:44.775 }, 00:17:44.775 { 00:17:44.775 "method": "bdev_nvme_set_hotplug", 00:17:44.775 "params": { 00:17:44.775 "enable": false, 00:17:44.775 "period_us": 100000 00:17:44.775 } 00:17:44.775 }, 00:17:44.775 { 00:17:44.775 "method": "bdev_wait_for_examine" 00:17:44.775 } 00:17:44.775 ] 00:17:44.775 }, 00:17:44.775 { 00:17:44.775 "subsystem": "nbd", 00:17:44.775 "config": [] 00:17:44.775 } 00:17:44.775 ] 00:17:44.775 }' 00:17:44.775 [2024-11-17 22:17:41.305675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:44.775 [2024-11-17 22:17:41.305795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79023 ] 00:17:45.033 [2024-11-17 22:17:41.446816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.033 [2024-11-17 22:17:41.538867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.293 [2024-11-17 22:17:41.694020] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.873 22:17:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.873 22:17:42 -- common/autotest_common.sh@862 -- # return 0 00:17:45.873 22:17:42 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:45.873 Running I/O for 10 seconds... 
00:17:55.894 00:17:55.894 Latency(us) 00:17:55.894 [2024-11-17T22:17:52.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.894 [2024-11-17T22:17:52.509Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:55.894 Verification LBA range: start 0x0 length 0x2000 00:17:55.894 TLSTESTn1 : 10.01 6431.01 25.12 0.00 0.00 19872.96 5034.36 23354.65 00:17:55.894 [2024-11-17T22:17:52.509Z] =================================================================================================================== 00:17:55.894 [2024-11-17T22:17:52.509Z] Total : 6431.01 25.12 0.00 0.00 19872.96 5034.36 23354.65 00:17:55.894 0 00:17:55.894 22:17:52 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:55.894 22:17:52 -- target/tls.sh@223 -- # killprocess 79023 00:17:55.894 22:17:52 -- common/autotest_common.sh@936 -- # '[' -z 79023 ']' 00:17:55.894 22:17:52 -- common/autotest_common.sh@940 -- # kill -0 79023 00:17:55.894 22:17:52 -- common/autotest_common.sh@941 -- # uname 00:17:55.894 22:17:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:55.895 22:17:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79023 00:17:55.895 22:17:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:55.895 killing process with pid 79023 00:17:55.895 22:17:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:55.895 22:17:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79023' 00:17:55.895 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.895 00:17:55.895 Latency(us) 00:17:55.895 [2024-11-17T22:17:52.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.895 [2024-11-17T22:17:52.510Z] =================================================================================================================== 00:17:55.895 [2024-11-17T22:17:52.510Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.895 22:17:52 -- common/autotest_common.sh@955 -- # kill 79023 00:17:55.895 22:17:52 -- common/autotest_common.sh@960 -- # wait 79023 00:17:56.155 22:17:52 -- target/tls.sh@224 -- # killprocess 78979 00:17:56.155 22:17:52 -- common/autotest_common.sh@936 -- # '[' -z 78979 ']' 00:17:56.155 22:17:52 -- common/autotest_common.sh@940 -- # kill -0 78979 00:17:56.155 22:17:52 -- common/autotest_common.sh@941 -- # uname 00:17:56.155 22:17:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.155 22:17:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78979 00:17:56.155 22:17:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:56.155 killing process with pid 78979 00:17:56.155 22:17:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:56.155 22:17:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78979' 00:17:56.155 22:17:52 -- common/autotest_common.sh@955 -- # kill 78979 00:17:56.155 22:17:52 -- common/autotest_common.sh@960 -- # wait 78979 00:17:56.414 22:17:52 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:56.414 22:17:52 -- target/tls.sh@227 -- # cleanup 00:17:56.414 22:17:52 -- target/tls.sh@15 -- # process_shm --id 0 00:17:56.414 22:17:52 -- common/autotest_common.sh@806 -- # type=--id 00:17:56.414 22:17:52 -- common/autotest_common.sh@807 -- # id=0 00:17:56.414 22:17:52 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:56.414 22:17:52 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:17:56.414 22:17:52 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:56.414 22:17:52 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:56.414 22:17:52 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:56.414 22:17:52 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:56.414 nvmf_trace.0 00:17:56.414 22:17:52 -- common/autotest_common.sh@821 -- # return 0 00:17:56.414 22:17:52 -- target/tls.sh@16 -- # killprocess 79023 00:17:56.414 22:17:52 -- common/autotest_common.sh@936 -- # '[' -z 79023 ']' 00:17:56.414 22:17:52 -- common/autotest_common.sh@940 -- # kill -0 79023 00:17:56.414 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79023) - No such process 00:17:56.414 Process with pid 79023 is not found 00:17:56.414 22:17:52 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79023 is not found' 00:17:56.414 22:17:52 -- target/tls.sh@17 -- # nvmftestfini 00:17:56.414 22:17:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:56.414 22:17:52 -- nvmf/common.sh@116 -- # sync 00:17:56.673 22:17:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:56.673 22:17:53 -- nvmf/common.sh@119 -- # set +e 00:17:56.673 22:17:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:56.673 22:17:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:56.673 rmmod nvme_tcp 00:17:56.673 rmmod nvme_fabrics 00:17:56.673 rmmod nvme_keyring 00:17:56.673 22:17:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:56.673 22:17:53 -- nvmf/common.sh@123 -- # set -e 00:17:56.673 22:17:53 -- nvmf/common.sh@124 -- # return 0 00:17:56.673 22:17:53 -- nvmf/common.sh@477 -- # '[' -n 78979 ']' 00:17:56.673 22:17:53 -- nvmf/common.sh@478 -- # killprocess 78979 00:17:56.673 22:17:53 -- common/autotest_common.sh@936 -- # '[' -z 78979 ']' 00:17:56.673 22:17:53 -- common/autotest_common.sh@940 -- # kill -0 78979 00:17:56.673 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (78979) - No such process 00:17:56.673 Process with pid 78979 is not found 00:17:56.673 22:17:53 -- common/autotest_common.sh@963 -- # echo 'Process with pid 78979 is not found' 00:17:56.673 22:17:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:56.673 22:17:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:56.673 22:17:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:56.673 22:17:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.673 22:17:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:56.673 22:17:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.673 22:17:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.673 22:17:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.673 22:17:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:56.673 22:17:53 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:56.673 00:17:56.673 real 1m11.540s 00:17:56.673 user 1m45.124s 00:17:56.673 sys 0m27.831s 00:17:56.673 22:17:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:56.673 22:17:53 -- common/autotest_common.sh@10 -- # set +x 00:17:56.673 ************************************ 00:17:56.673 END TEST nvmf_tls 00:17:56.673 
************************************ 00:17:56.673 22:17:53 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:56.673 22:17:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:56.673 22:17:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:56.673 22:17:53 -- common/autotest_common.sh@10 -- # set +x 00:17:56.673 ************************************ 00:17:56.673 START TEST nvmf_fips 00:17:56.673 ************************************ 00:17:56.673 22:17:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:56.673 * Looking for test storage... 00:17:56.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:56.673 22:17:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:56.673 22:17:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:56.673 22:17:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:56.933 22:17:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:56.933 22:17:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:56.933 22:17:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:56.933 22:17:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:56.933 22:17:53 -- scripts/common.sh@335 -- # IFS=.-: 00:17:56.933 22:17:53 -- scripts/common.sh@335 -- # read -ra ver1 00:17:56.933 22:17:53 -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.933 22:17:53 -- scripts/common.sh@336 -- # read -ra ver2 00:17:56.933 22:17:53 -- scripts/common.sh@337 -- # local 'op=<' 00:17:56.933 22:17:53 -- scripts/common.sh@339 -- # ver1_l=2 00:17:56.933 22:17:53 -- scripts/common.sh@340 -- # ver2_l=1 00:17:56.933 22:17:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:56.933 22:17:53 -- scripts/common.sh@343 -- # case "$op" in 00:17:56.933 22:17:53 -- scripts/common.sh@344 -- # : 1 00:17:56.933 22:17:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:56.933 22:17:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:56.933 22:17:53 -- scripts/common.sh@364 -- # decimal 1 00:17:56.933 22:17:53 -- scripts/common.sh@352 -- # local d=1 00:17:56.933 22:17:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.933 22:17:53 -- scripts/common.sh@354 -- # echo 1 00:17:56.933 22:17:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:56.933 22:17:53 -- scripts/common.sh@365 -- # decimal 2 00:17:56.933 22:17:53 -- scripts/common.sh@352 -- # local d=2 00:17:56.933 22:17:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.933 22:17:53 -- scripts/common.sh@354 -- # echo 2 00:17:56.933 22:17:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:56.933 22:17:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:56.933 22:17:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:56.933 22:17:53 -- scripts/common.sh@367 -- # return 0 00:17:56.933 22:17:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.933 22:17:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:56.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.933 --rc genhtml_branch_coverage=1 00:17:56.933 --rc genhtml_function_coverage=1 00:17:56.933 --rc genhtml_legend=1 00:17:56.933 --rc geninfo_all_blocks=1 00:17:56.933 --rc geninfo_unexecuted_blocks=1 00:17:56.933 00:17:56.933 ' 00:17:56.933 22:17:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:56.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.933 --rc genhtml_branch_coverage=1 00:17:56.933 --rc genhtml_function_coverage=1 00:17:56.933 --rc genhtml_legend=1 00:17:56.933 --rc geninfo_all_blocks=1 00:17:56.933 --rc geninfo_unexecuted_blocks=1 00:17:56.933 00:17:56.933 ' 00:17:56.933 22:17:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:56.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.933 --rc genhtml_branch_coverage=1 00:17:56.933 --rc genhtml_function_coverage=1 00:17:56.933 --rc genhtml_legend=1 00:17:56.933 --rc geninfo_all_blocks=1 00:17:56.933 --rc geninfo_unexecuted_blocks=1 00:17:56.933 00:17:56.933 ' 00:17:56.933 22:17:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:56.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.933 --rc genhtml_branch_coverage=1 00:17:56.933 --rc genhtml_function_coverage=1 00:17:56.933 --rc genhtml_legend=1 00:17:56.933 --rc geninfo_all_blocks=1 00:17:56.933 --rc geninfo_unexecuted_blocks=1 00:17:56.933 00:17:56.933 ' 00:17:56.933 22:17:53 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:56.933 22:17:53 -- nvmf/common.sh@7 -- # uname -s 00:17:56.933 22:17:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.933 22:17:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.933 22:17:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.933 22:17:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.933 22:17:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.933 22:17:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.933 22:17:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.933 22:17:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.933 22:17:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.933 22:17:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.933 22:17:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:17:56.933 
22:17:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:17:56.933 22:17:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.933 22:17:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.933 22:17:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:56.933 22:17:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:56.933 22:17:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.933 22:17:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.933 22:17:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.933 22:17:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.933 22:17:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.933 22:17:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.933 22:17:53 -- paths/export.sh@5 -- # export PATH 00:17:56.933 22:17:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.933 22:17:53 -- nvmf/common.sh@46 -- # : 0 00:17:56.933 22:17:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:56.933 22:17:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:56.933 22:17:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:56.933 22:17:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.933 22:17:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.933 22:17:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:56.933 22:17:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:56.933 22:17:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:56.933 22:17:53 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.933 22:17:53 -- fips/fips.sh@89 -- # check_openssl_version 00:17:56.933 22:17:53 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:56.933 22:17:53 -- fips/fips.sh@85 -- # openssl version 00:17:56.933 22:17:53 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:56.933 22:17:53 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:17:56.933 22:17:53 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:56.933 22:17:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:56.933 22:17:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:56.934 22:17:53 -- scripts/common.sh@335 -- # IFS=.-: 00:17:56.934 22:17:53 -- scripts/common.sh@335 -- # read -ra ver1 00:17:56.934 22:17:53 -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.934 22:17:53 -- scripts/common.sh@336 -- # read -ra ver2 00:17:56.934 22:17:53 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:56.934 22:17:53 -- scripts/common.sh@339 -- # ver1_l=3 00:17:56.934 22:17:53 -- scripts/common.sh@340 -- # ver2_l=3 00:17:56.934 22:17:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:56.934 22:17:53 -- scripts/common.sh@343 -- # case "$op" in 00:17:56.934 22:17:53 -- scripts/common.sh@347 -- # : 1 00:17:56.934 22:17:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:56.934 22:17:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:56.934 22:17:53 -- scripts/common.sh@364 -- # decimal 3 00:17:56.934 22:17:53 -- scripts/common.sh@352 -- # local d=3 00:17:56.934 22:17:53 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:56.934 22:17:53 -- scripts/common.sh@354 -- # echo 3 00:17:56.934 22:17:53 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:56.934 22:17:53 -- scripts/common.sh@365 -- # decimal 3 00:17:56.934 22:17:53 -- scripts/common.sh@352 -- # local d=3 00:17:56.934 22:17:53 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:56.934 22:17:53 -- scripts/common.sh@354 -- # echo 3 00:17:56.934 22:17:53 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:56.934 22:17:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:56.934 22:17:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:56.934 22:17:53 -- scripts/common.sh@363 -- # (( v++ )) 00:17:56.934 22:17:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:56.934 22:17:53 -- scripts/common.sh@364 -- # decimal 1 00:17:56.934 22:17:53 -- scripts/common.sh@352 -- # local d=1 00:17:56.934 22:17:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.934 22:17:53 -- scripts/common.sh@354 -- # echo 1 00:17:56.934 22:17:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:56.934 22:17:53 -- scripts/common.sh@365 -- # decimal 0 00:17:56.934 22:17:53 -- scripts/common.sh@352 -- # local d=0 00:17:56.934 22:17:53 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:56.934 22:17:53 -- scripts/common.sh@354 -- # echo 0 00:17:56.934 22:17:53 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:56.934 22:17:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:56.934 22:17:53 -- scripts/common.sh@366 -- # return 0 00:17:56.934 22:17:53 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:56.934 22:17:53 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:56.934 22:17:53 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:56.934 22:17:53 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:56.934 22:17:53 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:56.934 22:17:53 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:56.934 22:17:53 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:56.934 22:17:53 -- fips/fips.sh@113 -- # build_openssl_config 00:17:56.934 22:17:53 -- fips/fips.sh@37 -- # cat 00:17:56.934 22:17:53 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:56.934 22:17:53 -- fips/fips.sh@58 -- # cat - 00:17:56.934 22:17:53 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:56.934 22:17:53 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:56.934 22:17:53 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:56.934 22:17:53 -- fips/fips.sh@116 -- # openssl list -providers 00:17:56.934 22:17:53 -- fips/fips.sh@116 -- # grep name 00:17:56.934 22:17:53 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:56.934 22:17:53 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:56.934 22:17:53 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:56.934 22:17:53 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:56.934 22:17:53 -- fips/fips.sh@127 -- # : 00:17:56.934 22:17:53 -- common/autotest_common.sh@650 -- # local es=0 00:17:56.934 22:17:53 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:56.934 22:17:53 -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:56.934 22:17:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.934 22:17:53 -- common/autotest_common.sh@642 -- # type -t openssl 00:17:56.934 22:17:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.934 22:17:53 -- common/autotest_common.sh@644 -- # type -P openssl 00:17:56.934 22:17:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.934 22:17:53 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:56.934 22:17:53 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:56.934 22:17:53 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:57.193 Error setting digest 00:17:57.193 40929597577F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:57.193 40929597577F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:57.193 22:17:53 -- common/autotest_common.sh@653 -- # es=1 00:17:57.193 22:17:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:57.193 22:17:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:57.193 22:17:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:57.193 22:17:53 -- fips/fips.sh@130 -- # nvmftestinit 00:17:57.193 22:17:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:57.193 22:17:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.193 22:17:53 -- nvmf/common.sh@436 -- # prepare_net_devs 
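
The failing openssl md5 call above is the point of the check: with the Red Hat FIPS provider loaded, MD5 is not an approved digest, so the suite treats the error as confirmation that FIPS mode is enforced. A standalone sketch of the same probe, assuming the spdk_fips.conf generated by build_openssl_config is in place:

export OPENSSL_CONF=spdk_fips.conf        # points OpenSSL at the FIPS provider config
if openssl md5 /dev/null 2>/dev/null; then
    echo 'MD5 succeeded - FIPS mode is not enforced' >&2
    exit 1
fi
echo 'MD5 rejected as expected under FIPS'
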
00:17:57.193 22:17:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:57.193 22:17:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:57.193 22:17:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.193 22:17:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.193 22:17:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.193 22:17:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:57.193 22:17:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:57.193 22:17:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:57.193 22:17:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:57.193 22:17:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:57.193 22:17:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:57.193 22:17:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.193 22:17:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.193 22:17:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:57.193 22:17:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:57.193 22:17:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.193 22:17:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.193 22:17:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.193 22:17:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.193 22:17:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.193 22:17:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.193 22:17:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.193 22:17:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.193 22:17:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:57.193 22:17:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:57.193 Cannot find device "nvmf_tgt_br" 00:17:57.193 22:17:53 -- nvmf/common.sh@154 -- # true 00:17:57.193 22:17:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.193 Cannot find device "nvmf_tgt_br2" 00:17:57.193 22:17:53 -- nvmf/common.sh@155 -- # true 00:17:57.193 22:17:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:57.193 22:17:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:57.193 Cannot find device "nvmf_tgt_br" 00:17:57.193 22:17:53 -- nvmf/common.sh@157 -- # true 00:17:57.193 22:17:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:57.193 Cannot find device "nvmf_tgt_br2" 00:17:57.193 22:17:53 -- nvmf/common.sh@158 -- # true 00:17:57.193 22:17:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:57.193 22:17:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:57.193 22:17:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.193 22:17:53 -- nvmf/common.sh@161 -- # true 00:17:57.193 22:17:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.193 22:17:53 -- nvmf/common.sh@162 -- # true 00:17:57.193 22:17:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.193 22:17:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.193 22:17:53 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.193 22:17:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.193 22:17:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:57.193 22:17:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.193 22:17:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.193 22:17:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:57.193 22:17:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:57.193 22:17:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:57.193 22:17:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:57.193 22:17:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:57.193 22:17:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:57.193 22:17:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.193 22:17:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:57.453 22:17:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.453 22:17:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:57.453 22:17:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:57.453 22:17:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:57.453 22:17:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.453 22:17:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.453 22:17:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.453 22:17:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.453 22:17:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:57.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:17:57.453 00:17:57.453 --- 10.0.0.2 ping statistics --- 00:17:57.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.453 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:57.453 22:17:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:57.453 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:57.453 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:17:57.453 00:17:57.453 --- 10.0.0.3 ping statistics --- 00:17:57.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.453 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:57.453 22:17:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:17:57.453 00:17:57.453 --- 10.0.0.1 ping statistics --- 00:17:57.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.453 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:57.453 22:17:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.453 22:17:53 -- nvmf/common.sh@421 -- # return 0 00:17:57.453 22:17:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:57.453 22:17:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.453 22:17:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:57.453 22:17:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:57.453 22:17:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.453 22:17:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:57.453 22:17:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:57.453 22:17:53 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:57.453 22:17:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:57.453 22:17:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:57.453 22:17:53 -- common/autotest_common.sh@10 -- # set +x 00:17:57.453 22:17:53 -- nvmf/common.sh@469 -- # nvmfpid=79390 00:17:57.453 22:17:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:57.453 22:17:53 -- nvmf/common.sh@470 -- # waitforlisten 79390 00:17:57.453 22:17:53 -- common/autotest_common.sh@829 -- # '[' -z 79390 ']' 00:17:57.453 22:17:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.453 22:17:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.453 22:17:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.453 22:17:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.453 22:17:53 -- common/autotest_common.sh@10 -- # set +x 00:17:57.453 [2024-11-17 22:17:54.004288] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:57.453 [2024-11-17 22:17:54.004377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.712 [2024-11-17 22:17:54.139481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.712 [2024-11-17 22:17:54.245144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:57.712 [2024-11-17 22:17:54.245317] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.712 [2024-11-17 22:17:54.245334] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.712 [2024-11-17 22:17:54.245346] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
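
nvmf_veth_init above builds the virtual test network from scratch before the target starts; a condensed sketch of the essential steps (interface and namespace names match the log, addressing follows NVMF_IP_PREFIX):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
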
00:17:57.712 [2024-11-17 22:17:54.245385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.648 22:17:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.648 22:17:55 -- common/autotest_common.sh@862 -- # return 0 00:17:58.648 22:17:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:58.648 22:17:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.648 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:17:58.648 22:17:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.648 22:17:55 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:58.648 22:17:55 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:58.648 22:17:55 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:58.648 22:17:55 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:58.648 22:17:55 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:58.648 22:17:55 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:58.648 22:17:55 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:58.648 22:17:55 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.908 [2024-11-17 22:17:55.320016] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.908 [2024-11-17 22:17:55.335993] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:58.908 [2024-11-17 22:17:55.336185] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.908 malloc0 00:17:58.908 22:17:55 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.908 22:17:55 -- fips/fips.sh@147 -- # bdevperf_pid=79443 00:17:58.908 22:17:55 -- fips/fips.sh@148 -- # waitforlisten 79443 /var/tmp/bdevperf.sock 00:17:58.908 22:17:55 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.908 22:17:55 -- common/autotest_common.sh@829 -- # '[' -z 79443 ']' 00:17:58.908 22:17:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.908 22:17:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.908 22:17:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.908 22:17:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.908 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:17:58.908 [2024-11-17 22:17:55.453946] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:58.908 [2024-11-17 22:17:55.454042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79443 ] 00:17:59.167 [2024-11-17 22:17:55.587138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.167 [2024-11-17 22:17:55.668254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.104 22:17:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.104 22:17:56 -- common/autotest_common.sh@862 -- # return 0 00:18:00.104 22:17:56 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:00.104 [2024-11-17 22:17:56.574999] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:00.104 TLSTESTn1 00:18:00.104 22:17:56 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:00.363 Running I/O for 10 seconds... 00:18:10.337 00:18:10.337 Latency(us) 00:18:10.337 [2024-11-17T22:18:06.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.337 [2024-11-17T22:18:06.952Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:10.337 Verification LBA range: start 0x0 length 0x2000 00:18:10.337 TLSTESTn1 : 10.01 6424.57 25.10 0.00 0.00 19892.99 5540.77 21567.30 00:18:10.337 [2024-11-17T22:18:06.952Z] =================================================================================================================== 00:18:10.337 [2024-11-17T22:18:06.952Z] Total : 6424.57 25.10 0.00 0.00 19892.99 5540.77 21567.30 00:18:10.337 0 00:18:10.337 22:18:06 -- fips/fips.sh@1 -- # cleanup 00:18:10.337 22:18:06 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:10.337 22:18:06 -- common/autotest_common.sh@806 -- # type=--id 00:18:10.337 22:18:06 -- common/autotest_common.sh@807 -- # id=0 00:18:10.337 22:18:06 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:10.337 22:18:06 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:10.337 22:18:06 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:10.337 22:18:06 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:10.337 22:18:06 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:10.337 22:18:06 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:10.337 nvmf_trace.0 00:18:10.337 22:18:06 -- common/autotest_common.sh@821 -- # return 0 00:18:10.337 22:18:06 -- fips/fips.sh@16 -- # killprocess 79443 00:18:10.337 22:18:06 -- common/autotest_common.sh@936 -- # '[' -z 79443 ']' 00:18:10.337 22:18:06 -- common/autotest_common.sh@940 -- # kill -0 79443 00:18:10.337 22:18:06 -- common/autotest_common.sh@941 -- # uname 00:18:10.337 22:18:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:10.337 22:18:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79443 00:18:10.337 22:18:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:10.337 22:18:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:10.337 
killing process with pid 79443 00:18:10.337 22:18:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79443' 00:18:10.337 22:18:06 -- common/autotest_common.sh@955 -- # kill 79443 00:18:10.337 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.337 00:18:10.337 Latency(us) 00:18:10.337 [2024-11-17T22:18:06.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.337 [2024-11-17T22:18:06.952Z] =================================================================================================================== 00:18:10.337 [2024-11-17T22:18:06.952Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.337 22:18:06 -- common/autotest_common.sh@960 -- # wait 79443 00:18:10.905 22:18:07 -- fips/fips.sh@17 -- # nvmftestfini 00:18:10.905 22:18:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:10.905 22:18:07 -- nvmf/common.sh@116 -- # sync 00:18:10.905 22:18:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:10.905 22:18:07 -- nvmf/common.sh@119 -- # set +e 00:18:10.905 22:18:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:10.905 22:18:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:10.905 rmmod nvme_tcp 00:18:10.905 rmmod nvme_fabrics 00:18:10.905 rmmod nvme_keyring 00:18:10.905 22:18:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:10.905 22:18:07 -- nvmf/common.sh@123 -- # set -e 00:18:10.905 22:18:07 -- nvmf/common.sh@124 -- # return 0 00:18:10.905 22:18:07 -- nvmf/common.sh@477 -- # '[' -n 79390 ']' 00:18:10.905 22:18:07 -- nvmf/common.sh@478 -- # killprocess 79390 00:18:10.905 22:18:07 -- common/autotest_common.sh@936 -- # '[' -z 79390 ']' 00:18:10.905 22:18:07 -- common/autotest_common.sh@940 -- # kill -0 79390 00:18:10.905 22:18:07 -- common/autotest_common.sh@941 -- # uname 00:18:10.905 22:18:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:10.905 22:18:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79390 00:18:10.905 22:18:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:10.905 22:18:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:10.906 22:18:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79390' 00:18:10.906 killing process with pid 79390 00:18:10.906 22:18:07 -- common/autotest_common.sh@955 -- # kill 79390 00:18:10.906 22:18:07 -- common/autotest_common.sh@960 -- # wait 79390 00:18:11.164 22:18:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:11.164 22:18:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:11.164 22:18:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:11.164 22:18:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.164 22:18:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:11.164 22:18:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.164 22:18:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.164 22:18:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.164 22:18:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:11.164 22:18:07 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:11.164 00:18:11.164 real 0m14.476s 00:18:11.164 user 0m18.416s 00:18:11.164 sys 0m6.619s 00:18:11.164 22:18:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:11.164 22:18:07 -- common/autotest_common.sh@10 -- # set +x 00:18:11.164 ************************************ 00:18:11.164 END TEST nvmf_fips 
00:18:11.164 ************************************ 00:18:11.164 22:18:07 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:11.164 22:18:07 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:11.164 22:18:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:11.164 22:18:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:11.164 22:18:07 -- common/autotest_common.sh@10 -- # set +x 00:18:11.164 ************************************ 00:18:11.164 START TEST nvmf_fuzz 00:18:11.164 ************************************ 00:18:11.164 22:18:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:11.424 * Looking for test storage... 00:18:11.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:11.424 22:18:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:11.424 22:18:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:11.424 22:18:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:11.424 22:18:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:11.424 22:18:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:11.424 22:18:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:11.424 22:18:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:11.424 22:18:07 -- scripts/common.sh@335 -- # IFS=.-: 00:18:11.424 22:18:07 -- scripts/common.sh@335 -- # read -ra ver1 00:18:11.424 22:18:07 -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.424 22:18:07 -- scripts/common.sh@336 -- # read -ra ver2 00:18:11.424 22:18:07 -- scripts/common.sh@337 -- # local 'op=<' 00:18:11.424 22:18:07 -- scripts/common.sh@339 -- # ver1_l=2 00:18:11.424 22:18:07 -- scripts/common.sh@340 -- # ver2_l=1 00:18:11.424 22:18:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:11.424 22:18:07 -- scripts/common.sh@343 -- # case "$op" in 00:18:11.424 22:18:07 -- scripts/common.sh@344 -- # : 1 00:18:11.424 22:18:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:11.424 22:18:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.424 22:18:07 -- scripts/common.sh@364 -- # decimal 1 00:18:11.424 22:18:07 -- scripts/common.sh@352 -- # local d=1 00:18:11.424 22:18:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.424 22:18:07 -- scripts/common.sh@354 -- # echo 1 00:18:11.424 22:18:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:11.424 22:18:07 -- scripts/common.sh@365 -- # decimal 2 00:18:11.424 22:18:07 -- scripts/common.sh@352 -- # local d=2 00:18:11.424 22:18:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.424 22:18:07 -- scripts/common.sh@354 -- # echo 2 00:18:11.424 22:18:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:11.424 22:18:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:11.424 22:18:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:11.424 22:18:07 -- scripts/common.sh@367 -- # return 0 00:18:11.424 22:18:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.424 22:18:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:11.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.424 --rc genhtml_branch_coverage=1 00:18:11.424 --rc genhtml_function_coverage=1 00:18:11.424 --rc genhtml_legend=1 00:18:11.424 --rc geninfo_all_blocks=1 00:18:11.424 --rc geninfo_unexecuted_blocks=1 00:18:11.424 00:18:11.424 ' 00:18:11.424 22:18:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:11.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.424 --rc genhtml_branch_coverage=1 00:18:11.424 --rc genhtml_function_coverage=1 00:18:11.424 --rc genhtml_legend=1 00:18:11.424 --rc geninfo_all_blocks=1 00:18:11.424 --rc geninfo_unexecuted_blocks=1 00:18:11.424 00:18:11.424 ' 00:18:11.424 22:18:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:11.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.424 --rc genhtml_branch_coverage=1 00:18:11.424 --rc genhtml_function_coverage=1 00:18:11.424 --rc genhtml_legend=1 00:18:11.424 --rc geninfo_all_blocks=1 00:18:11.424 --rc geninfo_unexecuted_blocks=1 00:18:11.424 00:18:11.424 ' 00:18:11.424 22:18:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:11.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.424 --rc genhtml_branch_coverage=1 00:18:11.424 --rc genhtml_function_coverage=1 00:18:11.424 --rc genhtml_legend=1 00:18:11.424 --rc geninfo_all_blocks=1 00:18:11.424 --rc geninfo_unexecuted_blocks=1 00:18:11.424 00:18:11.424 ' 00:18:11.424 22:18:07 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:11.424 22:18:07 -- nvmf/common.sh@7 -- # uname -s 00:18:11.424 22:18:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.424 22:18:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.424 22:18:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.424 22:18:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.424 22:18:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.424 22:18:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.424 22:18:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.424 22:18:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.424 22:18:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.424 22:18:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.424 22:18:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 
00:18:11.424 22:18:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:18:11.424 22:18:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.424 22:18:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.424 22:18:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:11.424 22:18:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:11.425 22:18:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.425 22:18:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.425 22:18:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.425 22:18:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.425 22:18:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.425 22:18:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.425 22:18:07 -- paths/export.sh@5 -- # export PATH 00:18:11.425 22:18:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.425 22:18:07 -- nvmf/common.sh@46 -- # : 0 00:18:11.425 22:18:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:11.425 22:18:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:11.425 22:18:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:11.425 22:18:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.425 22:18:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.425 22:18:07 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:11.425 22:18:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:11.425 22:18:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:11.425 22:18:07 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:11.425 22:18:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:11.425 22:18:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.425 22:18:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:11.425 22:18:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:11.425 22:18:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:11.425 22:18:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.425 22:18:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.425 22:18:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.425 22:18:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:11.425 22:18:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:11.425 22:18:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:11.425 22:18:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:11.425 22:18:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:11.425 22:18:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:11.425 22:18:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.425 22:18:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.425 22:18:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:11.425 22:18:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:11.425 22:18:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:11.425 22:18:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:11.425 22:18:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:11.425 22:18:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.425 22:18:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:11.425 22:18:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:11.425 22:18:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:11.425 22:18:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:11.425 22:18:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:11.425 22:18:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:11.425 Cannot find device "nvmf_tgt_br" 00:18:11.425 22:18:07 -- nvmf/common.sh@154 -- # true 00:18:11.425 22:18:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.425 Cannot find device "nvmf_tgt_br2" 00:18:11.425 22:18:07 -- nvmf/common.sh@155 -- # true 00:18:11.425 22:18:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:11.425 22:18:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:11.425 Cannot find device "nvmf_tgt_br" 00:18:11.425 22:18:07 -- nvmf/common.sh@157 -- # true 00:18:11.425 22:18:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:11.425 Cannot find device "nvmf_tgt_br2" 00:18:11.425 22:18:08 -- nvmf/common.sh@158 -- # true 00:18:11.425 22:18:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:11.726 22:18:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:11.727 22:18:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:11.727 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.727 22:18:08 -- nvmf/common.sh@161 -- # true 00:18:11.727 22:18:08 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.727 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.727 22:18:08 -- nvmf/common.sh@162 -- # true 00:18:11.727 22:18:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:11.727 22:18:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:11.727 22:18:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:11.727 22:18:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:11.727 22:18:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:11.727 22:18:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:11.727 22:18:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:11.727 22:18:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:11.727 22:18:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:11.727 22:18:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:11.727 22:18:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:11.727 22:18:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:11.727 22:18:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:11.727 22:18:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:11.727 22:18:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:11.727 22:18:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:11.727 22:18:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:11.727 22:18:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:11.727 22:18:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:11.727 22:18:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:11.727 22:18:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:11.727 22:18:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:11.727 22:18:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:11.727 22:18:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:11.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:18:11.727 00:18:11.727 --- 10.0.0.2 ping statistics --- 00:18:11.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.727 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:11.727 22:18:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:11.727 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:11.727 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:18:11.727 00:18:11.727 --- 10.0.0.3 ping statistics --- 00:18:11.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.727 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:11.727 22:18:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:11.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:11.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:11.727 00:18:11.727 --- 10.0.0.1 ping statistics --- 00:18:11.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.727 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:11.727 22:18:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.727 22:18:08 -- nvmf/common.sh@421 -- # return 0 00:18:11.727 22:18:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:11.727 22:18:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.727 22:18:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:11.727 22:18:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:11.727 22:18:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.727 22:18:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:11.727 22:18:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:11.727 22:18:08 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=79797 00:18:11.727 22:18:08 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:11.727 22:18:08 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:11.727 22:18:08 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 79797 00:18:11.727 22:18:08 -- common/autotest_common.sh@829 -- # '[' -z 79797 ']' 00:18:11.727 22:18:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.727 22:18:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.727 22:18:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
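(Editor's note) The network bring-up interleaved through the log above (and repeated later for the multiconnection test) builds a small veth/bridge topology with the SPDK target isolated in a network namespace. Condensed into one place, using exactly the interface names and addresses that appear in the log, it looks roughly like this (a sketch; run as root):

#!/usr/bin/env bash
# Condensed sketch of the veth/namespace topology the autotest harness builds.
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# veth pairs: one initiator-side, two target-side (the second carries 10.0.0.3)
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# move the target ends into the namespace
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# addresses: initiator 10.0.0.1, target listeners 10.0.0.2 / 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up, inside and outside the namespace
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# bridge the host-side peers together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP (port 4420) in, and let traffic hairpin across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity check: initiator <-> target reachability (mirrors the pings in the log)
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

After this the target is launched with "ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt ...", so it listens on 10.0.0.2/10.0.0.3 while the initiator stays on the host side of the bridge.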
00:18:11.727 22:18:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.727 22:18:08 -- common/autotest_common.sh@10 -- # set +x 00:18:13.106 22:18:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.106 22:18:09 -- common/autotest_common.sh@862 -- # return 0 00:18:13.106 22:18:09 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:13.106 22:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.106 22:18:09 -- common/autotest_common.sh@10 -- # set +x 00:18:13.106 22:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.106 22:18:09 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:13.106 22:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.106 22:18:09 -- common/autotest_common.sh@10 -- # set +x 00:18:13.106 Malloc0 00:18:13.106 22:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.106 22:18:09 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.106 22:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.106 22:18:09 -- common/autotest_common.sh@10 -- # set +x 00:18:13.106 22:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.106 22:18:09 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.106 22:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.106 22:18:09 -- common/autotest_common.sh@10 -- # set +x 00:18:13.106 22:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.106 22:18:09 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.106 22:18:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.106 22:18:09 -- common/autotest_common.sh@10 -- # set +x 00:18:13.106 22:18:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.106 22:18:09 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:13.106 22:18:09 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:13.365 Shutting down the fuzz application 00:18:13.365 22:18:09 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:13.624 Shutting down the fuzz application 00:18:13.624 22:18:10 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.624 22:18:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.624 22:18:10 -- common/autotest_common.sh@10 -- # set +x 00:18:13.624 22:18:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.624 22:18:10 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:13.624 22:18:10 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:13.624 22:18:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:13.624 22:18:10 -- nvmf/common.sh@116 -- # sync 00:18:13.624 22:18:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:13.624 22:18:10 -- nvmf/common.sh@119 -- # set +e 00:18:13.624 22:18:10 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:13.624 22:18:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:13.624 rmmod nvme_tcp 00:18:13.624 rmmod nvme_fabrics 00:18:13.624 rmmod nvme_keyring 00:18:13.624 22:18:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:13.624 22:18:10 -- nvmf/common.sh@123 -- # set -e 00:18:13.624 22:18:10 -- nvmf/common.sh@124 -- # return 0 00:18:13.624 22:18:10 -- nvmf/common.sh@477 -- # '[' -n 79797 ']' 00:18:13.624 22:18:10 -- nvmf/common.sh@478 -- # killprocess 79797 00:18:13.624 22:18:10 -- common/autotest_common.sh@936 -- # '[' -z 79797 ']' 00:18:13.624 22:18:10 -- common/autotest_common.sh@940 -- # kill -0 79797 00:18:13.624 22:18:10 -- common/autotest_common.sh@941 -- # uname 00:18:13.624 22:18:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:13.624 22:18:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79797 00:18:13.884 22:18:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:13.884 22:18:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:13.884 killing process with pid 79797 00:18:13.884 22:18:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79797' 00:18:13.884 22:18:10 -- common/autotest_common.sh@955 -- # kill 79797 00:18:13.884 22:18:10 -- common/autotest_common.sh@960 -- # wait 79797 00:18:14.143 22:18:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:14.143 22:18:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:14.143 22:18:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:14.143 22:18:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.143 22:18:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:14.143 22:18:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.143 22:18:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.143 22:18:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.143 22:18:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:14.143 22:18:10 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:14.143 00:18:14.143 real 0m2.914s 00:18:14.143 user 0m3.011s 00:18:14.143 sys 0m0.746s 00:18:14.143 22:18:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:14.143 ************************************ 00:18:14.143 22:18:10 -- common/autotest_common.sh@10 -- # set +x 00:18:14.143 END TEST nvmf_fuzz 00:18:14.143 ************************************ 00:18:14.143 22:18:10 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:14.143 22:18:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:14.143 22:18:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:14.143 22:18:10 -- common/autotest_common.sh@10 -- # set +x 00:18:14.143 ************************************ 00:18:14.143 START TEST nvmf_multiconnection 00:18:14.143 ************************************ 00:18:14.143 22:18:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:14.143 * Looking for test storage... 
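(Editor's note) Before the multiconnection output continues, it may help to condense the nvmf_fuzz sequence that just completed above into one place. This is a sketch only: rpc_cmd is assumed to wrap SPDK's scripts/rpc.py, and the sleep stands in for the harness's waitforlisten on /var/tmp/spdk.sock; all other commands and flags are taken from the log.

#!/usr/bin/env bash
# Condensed sketch of the nvmf_fuzz run from the log (assumes the veth topology above exists).
set -e
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"          # assumption: rpc_cmd resolves to this script
TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

# start the target inside the test namespace (single core, all trace groups, as in the log)
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
sleep 5                             # harness uses waitforlisten instead of a fixed delay

# target-side plumbing: TCP transport, one malloc namespace, one subsystem with a listener
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create -b Malloc0 64 512
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

FUZZ="$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz"

# 30-second randomized run with a fixed seed, then a run driven by the bundled example JSON
"$FUZZ" -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$TRID" -N -a
"$FUZZ" -m 0x2 -r /var/tmp/nvme_fuzz -F "$TRID" -j "$SPDK/test/app/fuzz/nvme_fuzz/example.json" -a

"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# teardown of the target and namespace is then handled by nvmftestfini, as shown earlier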
00:18:14.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:14.402 22:18:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:14.402 22:18:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:14.402 22:18:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:14.402 22:18:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:14.402 22:18:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:14.402 22:18:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:14.402 22:18:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:14.402 22:18:10 -- scripts/common.sh@335 -- # IFS=.-: 00:18:14.402 22:18:10 -- scripts/common.sh@335 -- # read -ra ver1 00:18:14.402 22:18:10 -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.402 22:18:10 -- scripts/common.sh@336 -- # read -ra ver2 00:18:14.402 22:18:10 -- scripts/common.sh@337 -- # local 'op=<' 00:18:14.402 22:18:10 -- scripts/common.sh@339 -- # ver1_l=2 00:18:14.402 22:18:10 -- scripts/common.sh@340 -- # ver2_l=1 00:18:14.402 22:18:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:14.402 22:18:10 -- scripts/common.sh@343 -- # case "$op" in 00:18:14.402 22:18:10 -- scripts/common.sh@344 -- # : 1 00:18:14.402 22:18:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:14.402 22:18:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:14.402 22:18:10 -- scripts/common.sh@364 -- # decimal 1 00:18:14.402 22:18:10 -- scripts/common.sh@352 -- # local d=1 00:18:14.402 22:18:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.402 22:18:10 -- scripts/common.sh@354 -- # echo 1 00:18:14.402 22:18:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:14.402 22:18:10 -- scripts/common.sh@365 -- # decimal 2 00:18:14.402 22:18:10 -- scripts/common.sh@352 -- # local d=2 00:18:14.402 22:18:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.402 22:18:10 -- scripts/common.sh@354 -- # echo 2 00:18:14.402 22:18:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:14.402 22:18:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:14.402 22:18:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:14.402 22:18:10 -- scripts/common.sh@367 -- # return 0 00:18:14.402 22:18:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.402 22:18:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:14.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.402 --rc genhtml_branch_coverage=1 00:18:14.402 --rc genhtml_function_coverage=1 00:18:14.402 --rc genhtml_legend=1 00:18:14.402 --rc geninfo_all_blocks=1 00:18:14.402 --rc geninfo_unexecuted_blocks=1 00:18:14.402 00:18:14.402 ' 00:18:14.402 22:18:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:14.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.402 --rc genhtml_branch_coverage=1 00:18:14.402 --rc genhtml_function_coverage=1 00:18:14.402 --rc genhtml_legend=1 00:18:14.402 --rc geninfo_all_blocks=1 00:18:14.402 --rc geninfo_unexecuted_blocks=1 00:18:14.402 00:18:14.402 ' 00:18:14.402 22:18:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:14.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.402 --rc genhtml_branch_coverage=1 00:18:14.402 --rc genhtml_function_coverage=1 00:18:14.402 --rc genhtml_legend=1 00:18:14.402 --rc geninfo_all_blocks=1 00:18:14.402 --rc geninfo_unexecuted_blocks=1 00:18:14.402 00:18:14.402 ' 00:18:14.402 
22:18:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:14.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.402 --rc genhtml_branch_coverage=1 00:18:14.402 --rc genhtml_function_coverage=1 00:18:14.402 --rc genhtml_legend=1 00:18:14.402 --rc geninfo_all_blocks=1 00:18:14.402 --rc geninfo_unexecuted_blocks=1 00:18:14.402 00:18:14.402 ' 00:18:14.402 22:18:10 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.402 22:18:10 -- nvmf/common.sh@7 -- # uname -s 00:18:14.402 22:18:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.402 22:18:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.402 22:18:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.402 22:18:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.402 22:18:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.402 22:18:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.402 22:18:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.402 22:18:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.402 22:18:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.402 22:18:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.402 22:18:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:18:14.402 22:18:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:18:14.402 22:18:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.402 22:18:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.402 22:18:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.403 22:18:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.403 22:18:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.403 22:18:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.403 22:18:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.403 22:18:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.403 22:18:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.403 22:18:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.403 22:18:10 -- paths/export.sh@5 -- # export PATH 00:18:14.403 22:18:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.403 22:18:10 -- nvmf/common.sh@46 -- # : 0 00:18:14.403 22:18:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:14.403 22:18:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:14.403 22:18:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:14.403 22:18:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.403 22:18:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.403 22:18:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:14.403 22:18:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:14.403 22:18:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:14.403 22:18:10 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.403 22:18:10 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.403 22:18:10 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:14.403 22:18:10 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:14.403 22:18:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:14.403 22:18:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.403 22:18:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:14.403 22:18:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:14.403 22:18:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:14.403 22:18:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.403 22:18:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.403 22:18:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.403 22:18:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:14.403 22:18:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:14.403 22:18:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:14.403 22:18:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:14.403 22:18:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:14.403 22:18:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:14.403 22:18:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.403 22:18:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.403 22:18:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:14.403 22:18:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:14.403 22:18:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.403 22:18:10 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.403 22:18:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.403 22:18:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.403 22:18:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.403 22:18:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.403 22:18:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.403 22:18:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.403 22:18:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:14.403 22:18:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:14.403 Cannot find device "nvmf_tgt_br" 00:18:14.403 22:18:10 -- nvmf/common.sh@154 -- # true 00:18:14.403 22:18:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.403 Cannot find device "nvmf_tgt_br2" 00:18:14.403 22:18:10 -- nvmf/common.sh@155 -- # true 00:18:14.403 22:18:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:14.403 22:18:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:14.403 Cannot find device "nvmf_tgt_br" 00:18:14.403 22:18:10 -- nvmf/common.sh@157 -- # true 00:18:14.403 22:18:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:14.403 Cannot find device "nvmf_tgt_br2" 00:18:14.403 22:18:10 -- nvmf/common.sh@158 -- # true 00:18:14.403 22:18:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:14.403 22:18:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:14.403 22:18:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.662 22:18:11 -- nvmf/common.sh@161 -- # true 00:18:14.662 22:18:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.662 22:18:11 -- nvmf/common.sh@162 -- # true 00:18:14.662 22:18:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.662 22:18:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.662 22:18:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.662 22:18:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.662 22:18:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.662 22:18:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.662 22:18:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.662 22:18:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:14.662 22:18:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:14.662 22:18:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:14.662 22:18:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:14.662 22:18:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:14.662 22:18:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:14.662 22:18:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.662 22:18:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:14.662 22:18:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.662 22:18:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:14.662 22:18:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:14.662 22:18:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.662 22:18:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.662 22:18:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.662 22:18:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.662 22:18:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.662 22:18:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:14.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:18:14.662 00:18:14.662 --- 10.0.0.2 ping statistics --- 00:18:14.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.662 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:14.662 22:18:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:14.662 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:14.662 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:18:14.662 00:18:14.662 --- 10.0.0.3 ping statistics --- 00:18:14.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.662 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:14.662 22:18:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:14.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:14.662 00:18:14.662 --- 10.0.0.1 ping statistics --- 00:18:14.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.662 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:14.662 22:18:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.662 22:18:11 -- nvmf/common.sh@421 -- # return 0 00:18:14.662 22:18:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:14.662 22:18:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.662 22:18:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:14.662 22:18:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:14.662 22:18:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.662 22:18:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:14.662 22:18:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:14.662 22:18:11 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:14.662 22:18:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:14.662 22:18:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:14.662 22:18:11 -- common/autotest_common.sh@10 -- # set +x 00:18:14.662 22:18:11 -- nvmf/common.sh@469 -- # nvmfpid=80011 00:18:14.662 22:18:11 -- nvmf/common.sh@470 -- # waitforlisten 80011 00:18:14.662 22:18:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:14.662 22:18:11 -- common/autotest_common.sh@829 -- # '[' -z 80011 ']' 00:18:14.662 22:18:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.662 22:18:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.662 22:18:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.662 22:18:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.662 22:18:11 -- common/autotest_common.sh@10 -- # set +x 00:18:14.920 [2024-11-17 22:18:11.301819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:14.920 [2024-11-17 22:18:11.301923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.920 [2024-11-17 22:18:11.440869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:14.920 [2024-11-17 22:18:11.526305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:14.921 [2024-11-17 22:18:11.526458] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.921 [2024-11-17 22:18:11.526471] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.921 [2024-11-17 22:18:11.526479] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:14.921 [2024-11-17 22:18:11.527035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.921 [2024-11-17 22:18:11.527153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.921 [2024-11-17 22:18:11.527194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:14.921 [2024-11-17 22:18:11.527204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.858 22:18:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.858 22:18:12 -- common/autotest_common.sh@862 -- # return 0 00:18:15.858 22:18:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:15.858 22:18:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:15.858 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:15.858 22:18:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.858 22:18:12 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:15.858 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.858 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:15.858 [2024-11-17 22:18:12.404457] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.858 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.858 22:18:12 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:15.858 22:18:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:15.858 22:18:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:15.858 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.858 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:15.858 Malloc1 00:18:15.858 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.858 22:18:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:15.858 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.858 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:15.858 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.858 22:18:12 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:15.858 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.858 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.118 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.118 22:18:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.118 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.118 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.118 [2024-11-17 22:18:12.479947] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.118 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.118 22:18:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.118 22:18:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:16.118 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.118 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.118 Malloc2 00:18:16.118 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.119 22:18:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 Malloc3 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:16.119 
22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.119 22:18:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 Malloc4 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.119 22:18:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 Malloc5 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.119 22:18:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 Malloc6 00:18:16.119 22:18:12 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.119 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.119 22:18:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:16.119 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.119 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.379 22:18:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 Malloc7 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.379 22:18:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 Malloc8 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 
-- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.379 22:18:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 Malloc9 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.379 22:18:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 Malloc10 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.379 22:18:12 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 Malloc11 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.379 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.379 22:18:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:16.379 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.379 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.638 22:18:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.638 22:18:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:16.638 22:18:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.638 22:18:12 -- common/autotest_common.sh@10 -- # set +x 00:18:16.638 22:18:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.638 22:18:13 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:16.638 22:18:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.638 22:18:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:16.638 22:18:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:16.638 22:18:13 -- common/autotest_common.sh@1187 -- # local i=0 00:18:16.638 22:18:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.638 22:18:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:16.638 22:18:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:19.171 22:18:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:19.171 22:18:15 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:19.171 22:18:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:19.171 22:18:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:19.171 22:18:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.171 22:18:15 -- common/autotest_common.sh@1197 -- # return 0 00:18:19.171 22:18:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:19.171 22:18:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:19.171 22:18:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:19.171 22:18:15 -- common/autotest_common.sh@1187 -- # local i=0 00:18:19.171 22:18:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.171 22:18:15 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:19.171 22:18:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:21.074 22:18:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:21.074 22:18:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:18:21.074 22:18:17 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:21.074 22:18:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:21.074 22:18:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.074 22:18:17 -- common/autotest_common.sh@1197 -- # return 0 00:18:21.074 22:18:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.074 22:18:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:21.074 22:18:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:21.074 22:18:17 -- common/autotest_common.sh@1187 -- # local i=0 00:18:21.074 22:18:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.074 22:18:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:21.074 22:18:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:22.979 22:18:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:23.238 22:18:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:23.238 22:18:19 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:23.238 22:18:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:23.238 22:18:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.238 22:18:19 -- common/autotest_common.sh@1197 -- # return 0 00:18:23.238 22:18:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:23.239 22:18:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:23.239 22:18:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:23.239 22:18:19 -- common/autotest_common.sh@1187 -- # local i=0 00:18:23.239 22:18:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.239 22:18:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:23.239 22:18:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:25.773 22:18:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:25.773 22:18:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:25.773 22:18:21 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:25.773 22:18:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:25.773 22:18:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.773 22:18:21 -- common/autotest_common.sh@1197 -- # return 0 00:18:25.773 22:18:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:25.773 22:18:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:25.773 22:18:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:25.773 22:18:21 -- common/autotest_common.sh@1187 -- # local i=0 00:18:25.773 22:18:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.773 22:18:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:25.773 22:18:22 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:27.677 22:18:24 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:27.677 22:18:24 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:27.677 22:18:24 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:27.677 22:18:24 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:27.677 22:18:24 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.677 22:18:24 -- common/autotest_common.sh@1197 -- # return 0 00:18:27.677 22:18:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:27.677 22:18:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:27.677 22:18:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:27.677 22:18:24 -- common/autotest_common.sh@1187 -- # local i=0 00:18:27.677 22:18:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.677 22:18:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:27.677 22:18:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:29.595 22:18:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:29.595 22:18:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:29.595 22:18:26 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:29.853 22:18:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:29.853 22:18:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.853 22:18:26 -- common/autotest_common.sh@1197 -- # return 0 00:18:29.853 22:18:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:29.853 22:18:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:29.853 22:18:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:29.853 22:18:26 -- common/autotest_common.sh@1187 -- # local i=0 00:18:29.853 22:18:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:29.853 22:18:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:29.853 22:18:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:32.420 22:18:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:32.420 22:18:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:32.420 22:18:28 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:32.420 22:18:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:32.420 22:18:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.420 22:18:28 -- common/autotest_common.sh@1197 -- # return 0 00:18:32.420 22:18:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.420 22:18:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:32.420 22:18:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:32.420 22:18:28 -- common/autotest_common.sh@1187 -- # local i=0 00:18:32.420 22:18:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:32.420 22:18:28 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:32.420 22:18:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:34.323 22:18:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:34.323 22:18:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:34.323 22:18:30 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:34.323 22:18:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:34.323 22:18:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.323 22:18:30 -- common/autotest_common.sh@1197 -- # return 0 00:18:34.323 22:18:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.323 22:18:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:34.323 22:18:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:34.323 22:18:30 -- common/autotest_common.sh@1187 -- # local i=0 00:18:34.323 22:18:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.323 22:18:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:34.323 22:18:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:36.229 22:18:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:36.229 22:18:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:36.229 22:18:32 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:36.229 22:18:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:36.229 22:18:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.229 22:18:32 -- common/autotest_common.sh@1197 -- # return 0 00:18:36.229 22:18:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.229 22:18:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:36.488 22:18:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:36.488 22:18:33 -- common/autotest_common.sh@1187 -- # local i=0 00:18:36.488 22:18:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:36.488 22:18:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:36.488 22:18:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:39.021 22:18:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:39.021 22:18:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:39.021 22:18:35 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:39.021 22:18:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:39.021 22:18:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:39.021 22:18:35 -- common/autotest_common.sh@1197 -- # return 0 00:18:39.021 22:18:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.021 22:18:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:39.021 22:18:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:39.021 22:18:35 -- common/autotest_common.sh@1187 -- # local i=0 
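The per-subsystem bring-up and host attach traced above repeat one fixed pattern for cnode1 through cnode11. As a minimal sketch only — assuming the SPDK scripts/rpc.py client is invoked directly rather than through the test harness's rpc_cmd helper, and reusing the addresses, names and sizes shown in the trace — the loop amounts to:

for i in $(seq 1 11); do
  # Target side: malloc bdev, subsystem, namespace, TCP listener (arguments copied from the trace)
  scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  # Host side: attach over NVMe/TCP, then poll lsblk until the SPDK$i serial shows up
  nvme connect -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 \
    --hostid=a547cde3-4ce3-4fca-917e-78af6442a671
  for try in $(seq 1 15); do   # same 15-retry / 2-second wait the waitforserial helper does
    [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ] && break
    sleep 2
  done
done

The trace itself drives this through the rpc_cmd and waitforserial helpers from common/autotest_common.sh; the sketch only restates the commands they issue, under the assumptions noted above.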
00:18:39.021 22:18:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:39.021 22:18:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:39.021 22:18:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:40.926 22:18:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:40.926 22:18:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:40.926 22:18:37 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:18:40.926 22:18:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:40.926 22:18:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:40.926 22:18:37 -- common/autotest_common.sh@1197 -- # return 0 00:18:40.926 22:18:37 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:40.926 [global] 00:18:40.926 thread=1 00:18:40.926 invalidate=1 00:18:40.926 rw=read 00:18:40.926 time_based=1 00:18:40.926 runtime=10 00:18:40.926 ioengine=libaio 00:18:40.926 direct=1 00:18:40.926 bs=262144 00:18:40.926 iodepth=64 00:18:40.926 norandommap=1 00:18:40.926 numjobs=1 00:18:40.926 00:18:40.926 [job0] 00:18:40.926 filename=/dev/nvme0n1 00:18:40.926 [job1] 00:18:40.926 filename=/dev/nvme10n1 00:18:40.926 [job2] 00:18:40.926 filename=/dev/nvme1n1 00:18:40.926 [job3] 00:18:40.926 filename=/dev/nvme2n1 00:18:40.926 [job4] 00:18:40.926 filename=/dev/nvme3n1 00:18:40.926 [job5] 00:18:40.926 filename=/dev/nvme4n1 00:18:40.926 [job6] 00:18:40.926 filename=/dev/nvme5n1 00:18:40.926 [job7] 00:18:40.926 filename=/dev/nvme6n1 00:18:40.926 [job8] 00:18:40.926 filename=/dev/nvme7n1 00:18:40.926 [job9] 00:18:40.926 filename=/dev/nvme8n1 00:18:40.926 [job10] 00:18:40.926 filename=/dev/nvme9n1 00:18:40.926 Could not set queue depth (nvme0n1) 00:18:40.926 Could not set queue depth (nvme10n1) 00:18:40.926 Could not set queue depth (nvme1n1) 00:18:40.926 Could not set queue depth (nvme2n1) 00:18:40.926 Could not set queue depth (nvme3n1) 00:18:40.926 Could not set queue depth (nvme4n1) 00:18:40.926 Could not set queue depth (nvme5n1) 00:18:40.926 Could not set queue depth (nvme6n1) 00:18:40.926 Could not set queue depth (nvme7n1) 00:18:40.926 Could not set queue depth (nvme8n1) 00:18:40.926 Could not set queue depth (nvme9n1) 00:18:41.185 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.185 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.185 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.185 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.185 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.185 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.185 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.185 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.185 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.185 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:18:41.185 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.185 fio-3.35 00:18:41.185 Starting 11 threads 00:18:53.394 00:18:53.394 job0: (groupid=0, jobs=1): err= 0: pid=80494: Sun Nov 17 22:18:47 2024 00:18:53.394 read: IOPS=542, BW=136MiB/s (142MB/s)(1374MiB/10127msec) 00:18:53.394 slat (usec): min=15, max=111978, avg=1731.18, stdev=7199.76 00:18:53.394 clat (msec): min=11, max=306, avg=115.95, stdev=41.21 00:18:53.394 lat (msec): min=11, max=325, avg=117.68, stdev=42.18 00:18:53.394 clat percentiles (msec): 00:18:53.395 | 1.00th=[ 32], 5.00th=[ 57], 10.00th=[ 67], 20.00th=[ 79], 00:18:53.395 | 30.00th=[ 90], 40.00th=[ 100], 50.00th=[ 109], 60.00th=[ 125], 00:18:53.395 | 70.00th=[ 146], 80.00th=[ 155], 90.00th=[ 169], 95.00th=[ 186], 00:18:53.395 | 99.00th=[ 211], 99.50th=[ 224], 99.90th=[ 257], 99.95th=[ 305], 00:18:53.395 | 99.99th=[ 309] 00:18:53.395 bw ( KiB/s): min=86528, max=232960, per=9.58%, avg=139019.25, stdev=45491.56, samples=20 00:18:53.395 iops : min= 338, max= 910, avg=543.00, stdev=177.70, samples=20 00:18:53.395 lat (msec) : 20=0.35%, 50=2.80%, 100=38.41%, 250=58.19%, 500=0.25% 00:18:53.395 cpu : usr=0.17%, sys=1.78%, ctx=1055, majf=0, minf=4097 00:18:53.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:53.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.395 issued rwts: total=5496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.395 job1: (groupid=0, jobs=1): err= 0: pid=80495: Sun Nov 17 22:18:47 2024 00:18:53.395 read: IOPS=466, BW=117MiB/s (122MB/s)(1176MiB/10082msec) 00:18:53.395 slat (usec): min=14, max=129194, avg=2028.99, stdev=8291.10 00:18:53.395 clat (usec): min=828, max=375825, avg=134882.25, stdev=65190.78 00:18:53.395 lat (usec): min=1381, max=375865, avg=136911.23, stdev=66504.61 00:18:53.395 clat percentiles (msec): 00:18:53.395 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 48], 20.00th=[ 73], 00:18:53.395 | 30.00th=[ 90], 40.00th=[ 144], 50.00th=[ 155], 60.00th=[ 163], 00:18:53.395 | 70.00th=[ 169], 80.00th=[ 180], 90.00th=[ 207], 95.00th=[ 220], 00:18:53.395 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 376], 99.95th=[ 376], 00:18:53.395 | 99.99th=[ 376] 00:18:53.395 bw ( KiB/s): min=71168, max=229888, per=8.19%, avg=118778.25, stdev=45266.71, samples=20 00:18:53.395 iops : min= 278, max= 898, avg=463.75, stdev=176.74, samples=20 00:18:53.395 lat (usec) : 1000=0.02% 00:18:53.395 lat (msec) : 2=0.26%, 4=3.08%, 10=3.08%, 20=2.25%, 50=1.79% 00:18:53.395 lat (msec) : 100=21.32%, 250=66.59%, 500=1.62% 00:18:53.395 cpu : usr=0.21%, sys=1.62%, ctx=848, majf=0, minf=4097 00:18:53.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:53.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.395 issued rwts: total=4705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.395 job2: (groupid=0, jobs=1): err= 0: pid=80496: Sun Nov 17 22:18:47 2024 00:18:53.395 read: IOPS=444, BW=111MiB/s (116MB/s)(1116MiB/10052msec) 00:18:53.395 slat (usec): min=14, max=146310, avg=2079.11, stdev=10325.55 00:18:53.395 clat (msec): min=5, max=318, avg=141.68, stdev=41.44 00:18:53.395 
lat (msec): min=5, max=331, avg=143.76, stdev=43.24 00:18:53.395 clat percentiles (msec): 00:18:53.395 | 1.00th=[ 20], 5.00th=[ 66], 10.00th=[ 78], 20.00th=[ 108], 00:18:53.395 | 30.00th=[ 131], 40.00th=[ 144], 50.00th=[ 148], 60.00th=[ 155], 00:18:53.395 | 70.00th=[ 163], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 201], 00:18:53.395 | 99.00th=[ 215], 99.50th=[ 224], 99.90th=[ 275], 99.95th=[ 279], 00:18:53.395 | 99.99th=[ 317] 00:18:53.395 bw ( KiB/s): min=70797, max=230912, per=7.77%, avg=112726.15, stdev=35281.36, samples=20 00:18:53.395 iops : min= 276, max= 902, avg=439.95, stdev=137.85, samples=20 00:18:53.395 lat (msec) : 10=0.47%, 20=0.78%, 50=0.99%, 100=13.71%, 250=83.87% 00:18:53.395 lat (msec) : 500=0.18% 00:18:53.395 cpu : usr=0.17%, sys=1.66%, ctx=828, majf=0, minf=4097 00:18:53.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:53.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.395 issued rwts: total=4465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.395 job3: (groupid=0, jobs=1): err= 0: pid=80497: Sun Nov 17 22:18:47 2024 00:18:53.395 read: IOPS=626, BW=157MiB/s (164MB/s)(1569MiB/10018msec) 00:18:53.395 slat (usec): min=18, max=96347, avg=1515.93, stdev=6347.47 00:18:53.395 clat (msec): min=2, max=267, avg=100.44, stdev=51.02 00:18:53.395 lat (msec): min=3, max=269, avg=101.96, stdev=52.00 00:18:53.395 clat percentiles (msec): 00:18:53.395 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 35], 20.00th=[ 44], 00:18:53.395 | 30.00th=[ 70], 40.00th=[ 88], 50.00th=[ 107], 60.00th=[ 118], 00:18:53.395 | 70.00th=[ 124], 80.00th=[ 138], 90.00th=[ 169], 95.00th=[ 194], 00:18:53.395 | 99.00th=[ 226], 99.50th=[ 230], 99.90th=[ 236], 99.95th=[ 253], 00:18:53.395 | 99.99th=[ 268] 00:18:53.395 bw ( KiB/s): min=80384, max=446464, per=10.96%, avg=159004.65, stdev=83242.66, samples=20 00:18:53.395 iops : min= 314, max= 1744, avg=620.85, stdev=325.13, samples=20 00:18:53.395 lat (msec) : 4=0.05%, 10=1.05%, 20=0.62%, 50=22.21%, 100=22.31% 00:18:53.395 lat (msec) : 250=53.68%, 500=0.08% 00:18:53.395 cpu : usr=0.31%, sys=2.05%, ctx=1132, majf=0, minf=4097 00:18:53.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:53.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.395 issued rwts: total=6276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.395 job4: (groupid=0, jobs=1): err= 0: pid=80498: Sun Nov 17 22:18:47 2024 00:18:53.395 read: IOPS=428, BW=107MiB/s (112MB/s)(1078MiB/10073msec) 00:18:53.395 slat (usec): min=15, max=110741, avg=2265.84, stdev=8565.66 00:18:53.395 clat (msec): min=24, max=265, avg=146.98, stdev=33.65 00:18:53.395 lat (msec): min=24, max=294, avg=149.24, stdev=35.05 00:18:53.395 clat percentiles (msec): 00:18:53.395 | 1.00th=[ 70], 5.00th=[ 90], 10.00th=[ 101], 20.00th=[ 118], 00:18:53.395 | 30.00th=[ 134], 40.00th=[ 144], 50.00th=[ 150], 60.00th=[ 157], 00:18:53.395 | 70.00th=[ 163], 80.00th=[ 171], 90.00th=[ 192], 95.00th=[ 203], 00:18:53.395 | 99.00th=[ 228], 99.50th=[ 232], 99.90th=[ 247], 99.95th=[ 262], 00:18:53.395 | 99.99th=[ 266] 00:18:53.395 bw ( KiB/s): min=71680, max=164681, per=7.50%, avg=108805.25, stdev=23212.24, samples=20 
00:18:53.395 iops : min= 280, max= 643, avg=424.70, stdev=90.70, samples=20 00:18:53.395 lat (msec) : 50=0.30%, 100=10.67%, 250=88.94%, 500=0.09% 00:18:53.395 cpu : usr=0.11%, sys=1.47%, ctx=792, majf=0, minf=4097 00:18:53.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:18:53.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.395 issued rwts: total=4312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.395 job5: (groupid=0, jobs=1): err= 0: pid=80499: Sun Nov 17 22:18:47 2024 00:18:53.395 read: IOPS=564, BW=141MiB/s (148MB/s)(1431MiB/10129msec) 00:18:53.395 slat (usec): min=16, max=100588, avg=1705.56, stdev=6938.66 00:18:53.395 clat (msec): min=7, max=313, avg=111.39, stdev=51.09 00:18:53.395 lat (msec): min=7, max=313, avg=113.10, stdev=52.10 00:18:53.395 clat percentiles (msec): 00:18:53.395 | 1.00th=[ 13], 5.00th=[ 30], 10.00th=[ 39], 20.00th=[ 57], 00:18:53.395 | 30.00th=[ 83], 40.00th=[ 100], 50.00th=[ 113], 60.00th=[ 138], 00:18:53.395 | 70.00th=[ 148], 80.00th=[ 157], 90.00th=[ 169], 95.00th=[ 186], 00:18:53.395 | 99.00th=[ 201], 99.50th=[ 218], 99.90th=[ 296], 99.95th=[ 313], 00:18:53.395 | 99.99th=[ 313] 00:18:53.395 bw ( KiB/s): min=85504, max=347136, per=9.99%, avg=144933.10, stdev=71839.83, samples=20 00:18:53.395 iops : min= 334, max= 1356, avg=565.80, stdev=280.79, samples=20 00:18:53.395 lat (msec) : 10=0.47%, 20=2.13%, 50=13.81%, 100=25.10%, 250=58.11% 00:18:53.395 lat (msec) : 500=0.38% 00:18:53.395 cpu : usr=0.20%, sys=1.80%, ctx=1068, majf=0, minf=4097 00:18:53.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:53.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.395 issued rwts: total=5722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.395 job6: (groupid=0, jobs=1): err= 0: pid=80500: Sun Nov 17 22:18:47 2024 00:18:53.395 read: IOPS=560, BW=140MiB/s (147MB/s)(1419MiB/10125msec) 00:18:53.395 slat (usec): min=15, max=127925, avg=1711.30, stdev=7327.58 00:18:53.395 clat (msec): min=9, max=261, avg=112.23, stdev=46.77 00:18:53.395 lat (msec): min=11, max=328, avg=113.94, stdev=47.88 00:18:53.395 clat percentiles (msec): 00:18:53.395 | 1.00th=[ 21], 5.00th=[ 31], 10.00th=[ 39], 20.00th=[ 71], 00:18:53.395 | 30.00th=[ 96], 40.00th=[ 107], 50.00th=[ 118], 60.00th=[ 127], 00:18:53.395 | 70.00th=[ 136], 80.00th=[ 150], 90.00th=[ 174], 95.00th=[ 188], 00:18:53.395 | 99.00th=[ 205], 99.50th=[ 218], 99.90th=[ 259], 99.95th=[ 259], 00:18:53.395 | 99.99th=[ 262] 00:18:53.395 bw ( KiB/s): min=87040, max=396800, per=9.90%, avg=143609.25, stdev=65082.29, samples=20 00:18:53.395 iops : min= 340, max= 1550, avg=560.95, stdev=254.24, samples=20 00:18:53.395 lat (msec) : 10=0.02%, 20=0.78%, 50=16.99%, 100=16.85%, 250=65.25% 00:18:53.395 lat (msec) : 500=0.12% 00:18:53.395 cpu : usr=0.23%, sys=1.73%, ctx=1098, majf=0, minf=4097 00:18:53.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:53.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.395 issued rwts: total=5675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:53.395 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.395 job7: (groupid=0, jobs=1): err= 0: pid=80501: Sun Nov 17 22:18:47 2024 00:18:53.395 read: IOPS=477, BW=119MiB/s (125MB/s)(1200MiB/10059msec) 00:18:53.395 slat (usec): min=20, max=77851, avg=2041.77, stdev=7840.18 00:18:53.395 clat (msec): min=14, max=240, avg=131.80, stdev=41.71 00:18:53.395 lat (msec): min=14, max=277, avg=133.84, stdev=42.99 00:18:53.395 clat percentiles (msec): 00:18:53.395 | 1.00th=[ 23], 5.00th=[ 58], 10.00th=[ 73], 20.00th=[ 109], 00:18:53.396 | 30.00th=[ 116], 40.00th=[ 123], 50.00th=[ 129], 60.00th=[ 136], 00:18:53.396 | 70.00th=[ 150], 80.00th=[ 169], 90.00th=[ 188], 95.00th=[ 205], 00:18:53.396 | 99.00th=[ 222], 99.50th=[ 224], 99.90th=[ 228], 99.95th=[ 228], 00:18:53.396 | 99.99th=[ 241] 00:18:53.396 bw ( KiB/s): min=84480, max=221696, per=8.36%, avg=121242.20, stdev=30966.65, samples=20 00:18:53.396 iops : min= 330, max= 866, avg=473.40, stdev=121.01, samples=20 00:18:53.396 lat (msec) : 20=0.83%, 50=3.33%, 100=11.33%, 250=84.50% 00:18:53.396 cpu : usr=0.14%, sys=1.77%, ctx=914, majf=0, minf=4097 00:18:53.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:53.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.396 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.396 job8: (groupid=0, jobs=1): err= 0: pid=80502: Sun Nov 17 22:18:47 2024 00:18:53.396 read: IOPS=669, BW=167MiB/s (175MB/s)(1695MiB/10135msec) 00:18:53.396 slat (usec): min=14, max=139960, avg=1429.41, stdev=6238.82 00:18:53.396 clat (msec): min=3, max=334, avg=94.00, stdev=51.73 00:18:53.396 lat (msec): min=3, max=335, avg=95.43, stdev=52.73 00:18:53.396 clat percentiles (msec): 00:18:53.396 | 1.00th=[ 17], 5.00th=[ 27], 10.00th=[ 31], 20.00th=[ 40], 00:18:53.396 | 30.00th=[ 57], 40.00th=[ 74], 50.00th=[ 88], 60.00th=[ 105], 00:18:53.396 | 70.00th=[ 128], 80.00th=[ 150], 90.00th=[ 163], 95.00th=[ 171], 00:18:53.396 | 99.00th=[ 205], 99.50th=[ 239], 99.90th=[ 317], 99.95th=[ 317], 00:18:53.396 | 99.99th=[ 334] 00:18:53.396 bw ( KiB/s): min=81408, max=385795, per=11.86%, avg=172019.80, stdev=84694.45, samples=20 00:18:53.396 iops : min= 318, max= 1507, avg=671.70, stdev=330.94, samples=20 00:18:53.396 lat (msec) : 4=0.12%, 10=0.46%, 20=1.18%, 50=26.50%, 100=28.15% 00:18:53.396 lat (msec) : 250=43.16%, 500=0.43% 00:18:53.396 cpu : usr=0.27%, sys=2.03%, ctx=1335, majf=0, minf=4097 00:18:53.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:53.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.396 issued rwts: total=6781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.396 job9: (groupid=0, jobs=1): err= 0: pid=80503: Sun Nov 17 22:18:47 2024 00:18:53.396 read: IOPS=485, BW=121MiB/s (127MB/s)(1230MiB/10134msec) 00:18:53.396 slat (usec): min=17, max=71415, avg=1963.11, stdev=6998.18 00:18:53.396 clat (msec): min=29, max=300, avg=129.66, stdev=31.95 00:18:53.396 lat (msec): min=29, max=301, avg=131.62, stdev=32.81 00:18:53.396 clat percentiles (msec): 00:18:53.396 | 1.00th=[ 81], 5.00th=[ 91], 10.00th=[ 100], 20.00th=[ 108], 00:18:53.396 | 30.00th=[ 112], 40.00th=[ 
118], 50.00th=[ 124], 60.00th=[ 129], 00:18:53.396 | 70.00th=[ 136], 80.00th=[ 146], 90.00th=[ 176], 95.00th=[ 194], 00:18:53.396 | 99.00th=[ 236], 99.50th=[ 255], 99.90th=[ 300], 99.95th=[ 300], 00:18:53.396 | 99.99th=[ 300] 00:18:53.396 bw ( KiB/s): min=68608, max=159038, per=8.57%, avg=124300.90, stdev=24435.58, samples=20 00:18:53.396 iops : min= 268, max= 621, avg=485.35, stdev=95.43, samples=20 00:18:53.396 lat (msec) : 50=0.10%, 100=10.79%, 250=88.33%, 500=0.77% 00:18:53.396 cpu : usr=0.18%, sys=1.64%, ctx=1029, majf=0, minf=4097 00:18:53.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:53.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.396 issued rwts: total=4919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.396 job10: (groupid=0, jobs=1): err= 0: pid=80504: Sun Nov 17 22:18:47 2024 00:18:53.396 read: IOPS=424, BW=106MiB/s (111MB/s)(1071MiB/10081msec) 00:18:53.396 slat (usec): min=14, max=83437, avg=2271.80, stdev=8340.81 00:18:53.396 clat (usec): min=1495, max=271479, avg=148018.81, stdev=36772.14 00:18:53.396 lat (usec): min=1529, max=278526, avg=150290.62, stdev=37929.49 00:18:53.396 clat percentiles (msec): 00:18:53.396 | 1.00th=[ 8], 5.00th=[ 110], 10.00th=[ 117], 20.00th=[ 126], 00:18:53.396 | 30.00th=[ 131], 40.00th=[ 136], 50.00th=[ 144], 60.00th=[ 150], 00:18:53.396 | 70.00th=[ 163], 80.00th=[ 178], 90.00th=[ 199], 95.00th=[ 211], 00:18:53.396 | 99.00th=[ 230], 99.50th=[ 234], 99.90th=[ 245], 99.95th=[ 247], 00:18:53.396 | 99.99th=[ 271] 00:18:53.396 bw ( KiB/s): min=75776, max=137964, per=7.45%, avg=108028.10, stdev=17031.01, samples=20 00:18:53.396 iops : min= 296, max= 538, avg=421.75, stdev=66.55, samples=20 00:18:53.396 lat (msec) : 2=0.05%, 4=0.54%, 10=0.84%, 50=0.89%, 100=1.33% 00:18:53.396 lat (msec) : 250=96.31%, 500=0.05% 00:18:53.396 cpu : usr=0.15%, sys=1.43%, ctx=845, majf=0, minf=4097 00:18:53.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:18:53.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.396 issued rwts: total=4284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.396 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.396 00:18:53.396 Run status group 0 (all jobs): 00:18:53.396 READ: bw=1417MiB/s (1486MB/s), 106MiB/s-167MiB/s (111MB/s-175MB/s), io=14.0GiB (15.1GB), run=10018-10135msec 00:18:53.396 00:18:53.396 Disk stats (read/write): 00:18:53.396 nvme0n1: ios=10864/0, merge=0/0, ticks=1232860/0, in_queue=1232860, util=97.02% 00:18:53.396 nvme10n1: ios=9287/0, merge=0/0, ticks=1240271/0, in_queue=1240271, util=97.66% 00:18:53.396 nvme1n1: ios=8803/0, merge=0/0, ticks=1243846/0, in_queue=1243846, util=97.83% 00:18:53.396 nvme2n1: ios=12480/0, merge=0/0, ticks=1242437/0, in_queue=1242437, util=97.97% 00:18:53.396 nvme3n1: ios=8497/0, merge=0/0, ticks=1241384/0, in_queue=1241384, util=97.92% 00:18:53.396 nvme4n1: ios=11316/0, merge=0/0, ticks=1232796/0, in_queue=1232796, util=98.08% 00:18:53.396 nvme5n1: ios=11222/0, merge=0/0, ticks=1234041/0, in_queue=1234041, util=97.96% 00:18:53.396 nvme6n1: ios=9493/0, merge=0/0, ticks=1243896/0, in_queue=1243896, util=98.45% 00:18:53.396 nvme7n1: ios=13449/0, merge=0/0, ticks=1229909/0, in_queue=1229909, util=98.10% 
00:18:53.396 nvme8n1: ios=9710/0, merge=0/0, ticks=1229740/0, in_queue=1229740, util=98.35% 00:18:53.396 nvme9n1: ios=8466/0, merge=0/0, ticks=1241624/0, in_queue=1241624, util=98.86% 00:18:53.396 22:18:48 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:53.396 [global] 00:18:53.396 thread=1 00:18:53.396 invalidate=1 00:18:53.396 rw=randwrite 00:18:53.396 time_based=1 00:18:53.396 runtime=10 00:18:53.396 ioengine=libaio 00:18:53.396 direct=1 00:18:53.396 bs=262144 00:18:53.396 iodepth=64 00:18:53.396 norandommap=1 00:18:53.396 numjobs=1 00:18:53.396 00:18:53.396 [job0] 00:18:53.396 filename=/dev/nvme0n1 00:18:53.396 [job1] 00:18:53.396 filename=/dev/nvme10n1 00:18:53.396 [job2] 00:18:53.396 filename=/dev/nvme1n1 00:18:53.396 [job3] 00:18:53.396 filename=/dev/nvme2n1 00:18:53.396 [job4] 00:18:53.396 filename=/dev/nvme3n1 00:18:53.396 [job5] 00:18:53.396 filename=/dev/nvme4n1 00:18:53.396 [job6] 00:18:53.396 filename=/dev/nvme5n1 00:18:53.396 [job7] 00:18:53.396 filename=/dev/nvme6n1 00:18:53.396 [job8] 00:18:53.396 filename=/dev/nvme7n1 00:18:53.396 [job9] 00:18:53.396 filename=/dev/nvme8n1 00:18:53.396 [job10] 00:18:53.396 filename=/dev/nvme9n1 00:18:53.396 Could not set queue depth (nvme0n1) 00:18:53.396 Could not set queue depth (nvme10n1) 00:18:53.396 Could not set queue depth (nvme1n1) 00:18:53.396 Could not set queue depth (nvme2n1) 00:18:53.396 Could not set queue depth (nvme3n1) 00:18:53.396 Could not set queue depth (nvme4n1) 00:18:53.396 Could not set queue depth (nvme5n1) 00:18:53.396 Could not set queue depth (nvme6n1) 00:18:53.396 Could not set queue depth (nvme7n1) 00:18:53.396 Could not set queue depth (nvme8n1) 00:18:53.396 Could not set queue depth (nvme9n1) 00:18:53.396 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.396 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.396 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.396 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.396 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.396 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.396 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.396 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.396 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.396 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.396 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.396 fio-3.35 00:18:53.396 Starting 11 threads 00:19:03.382 00:19:03.383 job0: (groupid=0, jobs=1): err= 0: pid=80699: Sun Nov 17 22:18:58 2024 00:19:03.383 write: IOPS=344, BW=86.1MiB/s (90.3MB/s)(875MiB/10165msec); 0 zone resets 00:19:03.383 slat (usec): min=21, max=51329, avg=2854.34, stdev=4996.70 00:19:03.383 clat (msec): min=53, max=364, 
avg=182.88, stdev=26.65 00:19:03.383 lat (msec): min=53, max=364, avg=185.74, stdev=26.58 00:19:03.383 clat percentiles (msec): 00:19:03.383 | 1.00th=[ 134], 5.00th=[ 148], 10.00th=[ 157], 20.00th=[ 163], 00:19:03.383 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 194], 00:19:03.383 | 70.00th=[ 201], 80.00th=[ 209], 90.00th=[ 211], 95.00th=[ 213], 00:19:03.383 | 99.00th=[ 249], 99.50th=[ 313], 99.90th=[ 351], 99.95th=[ 363], 00:19:03.383 | 99.99th=[ 363] 00:19:03.383 bw ( KiB/s): min=75776, max=104960, per=7.91%, avg=88003.35, stdev=9514.33, samples=20 00:19:03.383 iops : min= 296, max= 410, avg=343.75, stdev=37.16, samples=20 00:19:03.383 lat (msec) : 100=0.57%, 250=98.46%, 500=0.97% 00:19:03.383 cpu : usr=0.84%, sys=0.84%, ctx=4918, majf=0, minf=1 00:19:03.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:03.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.383 issued rwts: total=0,3501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.383 job1: (groupid=0, jobs=1): err= 0: pid=80700: Sun Nov 17 22:18:58 2024 00:19:03.383 write: IOPS=279, BW=69.8MiB/s (73.2MB/s)(709MiB/10165msec); 0 zone resets 00:19:03.383 slat (usec): min=18, max=147110, avg=3518.59, stdev=7374.78 00:19:03.383 clat (msec): min=150, max=371, avg=225.69, stdev=39.08 00:19:03.383 lat (msec): min=151, max=371, avg=229.20, stdev=38.93 00:19:03.383 clat percentiles (msec): 00:19:03.383 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:19:03.383 | 30.00th=[ 199], 40.00th=[ 203], 50.00th=[ 209], 60.00th=[ 234], 00:19:03.383 | 70.00th=[ 247], 80.00th=[ 257], 90.00th=[ 284], 95.00th=[ 300], 00:19:03.383 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 372], 99.95th=[ 372], 00:19:03.383 | 99.99th=[ 372] 00:19:03.383 bw ( KiB/s): min=40960, max=86528, per=6.38%, avg=71014.40, stdev=12488.58, samples=20 00:19:03.383 iops : min= 160, max= 338, avg=277.40, stdev=48.78, samples=20 00:19:03.383 lat (msec) : 250=76.14%, 500=23.86% 00:19:03.383 cpu : usr=0.79%, sys=0.95%, ctx=2669, majf=0, minf=1 00:19:03.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:19:03.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.383 issued rwts: total=0,2837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.383 job2: (groupid=0, jobs=1): err= 0: pid=80713: Sun Nov 17 22:18:58 2024 00:19:03.383 write: IOPS=330, BW=82.7MiB/s (86.7MB/s)(842MiB/10172msec); 0 zone resets 00:19:03.383 slat (usec): min=18, max=90117, avg=2877.32, stdev=5514.99 00:19:03.383 clat (msec): min=11, max=374, avg=190.41, stdev=44.02 00:19:03.383 lat (msec): min=11, max=374, avg=193.29, stdev=44.42 00:19:03.383 clat percentiles (msec): 00:19:03.383 | 1.00th=[ 54], 5.00th=[ 148], 10.00th=[ 161], 20.00th=[ 165], 00:19:03.383 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 194], 60.00th=[ 201], 00:19:03.383 | 70.00th=[ 209], 80.00th=[ 211], 90.00th=[ 215], 95.00th=[ 284], 00:19:03.383 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 363], 99.95th=[ 376], 00:19:03.383 | 99.99th=[ 376] 00:19:03.383 bw ( KiB/s): min=57344, max=113152, per=7.60%, avg=84556.80, stdev=13743.70, samples=20 00:19:03.383 iops : min= 224, max= 442, avg=330.30, stdev=53.69, 
samples=20 00:19:03.383 lat (msec) : 20=0.18%, 50=0.74%, 100=2.88%, 250=88.12%, 500=8.08% 00:19:03.383 cpu : usr=0.77%, sys=0.90%, ctx=3123, majf=0, minf=1 00:19:03.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:03.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.383 issued rwts: total=0,3366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.383 job3: (groupid=0, jobs=1): err= 0: pid=80714: Sun Nov 17 22:18:58 2024 00:19:03.383 write: IOPS=279, BW=69.9MiB/s (73.3MB/s)(711MiB/10170msec); 0 zone resets 00:19:03.383 slat (usec): min=20, max=84014, avg=3512.82, stdev=7043.00 00:19:03.383 clat (msec): min=27, max=363, avg=225.17, stdev=39.30 00:19:03.383 lat (msec): min=27, max=363, avg=228.68, stdev=39.17 00:19:03.383 clat percentiles (msec): 00:19:03.383 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:19:03.383 | 30.00th=[ 197], 40.00th=[ 201], 50.00th=[ 211], 60.00th=[ 239], 00:19:03.383 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 279], 95.00th=[ 292], 00:19:03.383 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 351], 99.95th=[ 363], 00:19:03.383 | 99.99th=[ 363] 00:19:03.383 bw ( KiB/s): min=51815, max=86016, per=6.40%, avg=71224.35, stdev=11433.30, samples=20 00:19:03.383 iops : min= 202, max= 336, avg=278.20, stdev=44.70, samples=20 00:19:03.383 lat (msec) : 50=0.18%, 250=69.67%, 500=30.16% 00:19:03.383 cpu : usr=0.65%, sys=0.62%, ctx=2933, majf=0, minf=1 00:19:03.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:19:03.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.383 issued rwts: total=0,2845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.383 job4: (groupid=0, jobs=1): err= 0: pid=80715: Sun Nov 17 22:18:58 2024 00:19:03.383 write: IOPS=278, BW=69.7MiB/s (73.1MB/s)(710MiB/10175msec); 0 zone resets 00:19:03.383 slat (usec): min=19, max=53419, avg=3519.76, stdev=6796.24 00:19:03.383 clat (msec): min=9, max=372, avg=225.82, stdev=39.51 00:19:03.383 lat (msec): min=9, max=372, avg=229.34, stdev=39.46 00:19:03.383 clat percentiles (msec): 00:19:03.383 | 1.00th=[ 104], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 197], 00:19:03.383 | 30.00th=[ 201], 40.00th=[ 205], 50.00th=[ 218], 60.00th=[ 243], 00:19:03.383 | 70.00th=[ 253], 80.00th=[ 259], 90.00th=[ 266], 95.00th=[ 284], 00:19:03.383 | 99.00th=[ 342], 99.50th=[ 347], 99.90th=[ 359], 99.95th=[ 372], 00:19:03.383 | 99.99th=[ 372] 00:19:03.383 bw ( KiB/s): min=55296, max=83968, per=6.38%, avg=71040.00, stdev=9389.34, samples=20 00:19:03.383 iops : min= 216, max= 328, avg=277.50, stdev=36.68, samples=20 00:19:03.383 lat (msec) : 10=0.11%, 50=0.11%, 100=0.70%, 250=66.53%, 500=32.56% 00:19:03.383 cpu : usr=0.83%, sys=0.82%, ctx=1980, majf=0, minf=1 00:19:03.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:19:03.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.383 issued rwts: total=0,2838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.383 job5: (groupid=0, jobs=1): err= 0: 
pid=80716: Sun Nov 17 22:18:58 2024 00:19:03.383 write: IOPS=345, BW=86.4MiB/s (90.6MB/s)(879MiB/10171msec); 0 zone resets 00:19:03.383 slat (usec): min=19, max=16879, avg=2838.52, stdev=4924.03 00:19:03.383 clat (msec): min=23, max=373, avg=182.13, stdev=29.05 00:19:03.383 lat (msec): min=23, max=373, avg=184.96, stdev=29.06 00:19:03.383 clat percentiles (msec): 00:19:03.383 | 1.00th=[ 94], 5.00th=[ 148], 10.00th=[ 157], 20.00th=[ 163], 00:19:03.383 | 30.00th=[ 165], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 194], 00:19:03.383 | 70.00th=[ 201], 80.00th=[ 209], 90.00th=[ 211], 95.00th=[ 213], 00:19:03.383 | 99.00th=[ 257], 99.50th=[ 321], 99.90th=[ 363], 99.95th=[ 376], 00:19:03.383 | 99.99th=[ 376] 00:19:03.383 bw ( KiB/s): min=76800, max=106496, per=7.95%, avg=88432.50, stdev=10071.21, samples=20 00:19:03.383 iops : min= 300, max= 416, avg=345.40, stdev=39.29, samples=20 00:19:03.383 lat (msec) : 50=0.45%, 100=0.57%, 250=97.90%, 500=1.08% 00:19:03.383 cpu : usr=0.66%, sys=1.19%, ctx=4196, majf=0, minf=1 00:19:03.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:03.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.383 issued rwts: total=0,3517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.383 job6: (groupid=0, jobs=1): err= 0: pid=80717: Sun Nov 17 22:18:58 2024 00:19:03.383 write: IOPS=346, BW=86.5MiB/s (90.7MB/s)(880MiB/10167msec); 0 zone resets 00:19:03.383 slat (usec): min=20, max=16492, avg=2837.16, stdev=4920.09 00:19:03.383 clat (msec): min=14, max=375, avg=182.05, stdev=29.27 00:19:03.384 lat (msec): min=15, max=375, avg=184.88, stdev=29.29 00:19:03.384 clat percentiles (msec): 00:19:03.384 | 1.00th=[ 95], 5.00th=[ 148], 10.00th=[ 155], 20.00th=[ 163], 00:19:03.384 | 30.00th=[ 165], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 194], 00:19:03.384 | 70.00th=[ 201], 80.00th=[ 209], 90.00th=[ 211], 95.00th=[ 213], 00:19:03.384 | 99.00th=[ 259], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 376], 00:19:03.384 | 99.99th=[ 376] 00:19:03.384 bw ( KiB/s): min=76288, max=104448, per=7.95%, avg=88458.30, stdev=10010.14, samples=20 00:19:03.384 iops : min= 298, max= 408, avg=345.50, stdev=39.04, samples=20 00:19:03.384 lat (msec) : 20=0.03%, 50=0.45%, 100=0.57%, 250=97.87%, 500=1.08% 00:19:03.384 cpu : usr=0.51%, sys=1.25%, ctx=4351, majf=0, minf=1 00:19:03.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:03.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.384 issued rwts: total=0,3518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.384 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.384 job7: (groupid=0, jobs=1): err= 0: pid=80718: Sun Nov 17 22:18:58 2024 00:19:03.384 write: IOPS=287, BW=71.8MiB/s (75.3MB/s)(730MiB/10168msec); 0 zone resets 00:19:03.384 slat (usec): min=24, max=86344, avg=3408.11, stdev=6939.08 00:19:03.384 clat (msec): min=3, max=373, avg=219.27, stdev=56.85 00:19:03.384 lat (msec): min=3, max=373, avg=222.68, stdev=57.26 00:19:03.384 clat percentiles (msec): 00:19:03.384 | 1.00th=[ 29], 5.00th=[ 47], 10.00th=[ 186], 20.00th=[ 197], 00:19:03.384 | 30.00th=[ 203], 40.00th=[ 205], 50.00th=[ 211], 60.00th=[ 236], 00:19:03.384 | 70.00th=[ 253], 80.00th=[ 257], 90.00th=[ 271], 95.00th=[ 292], 
00:19:03.384 | 99.00th=[ 368], 99.50th=[ 372], 99.90th=[ 376], 99.95th=[ 376], 00:19:03.384 | 99.99th=[ 376] 00:19:03.384 bw ( KiB/s): min=45056, max=115430, per=6.58%, avg=73176.30, stdev=13979.20, samples=20 00:19:03.384 iops : min= 176, max= 450, avg=285.80, stdev=54.46, samples=20 00:19:03.384 lat (msec) : 4=0.07%, 10=0.07%, 20=0.48%, 50=4.62%, 100=0.03% 00:19:03.384 lat (msec) : 250=63.44%, 500=31.29% 00:19:03.384 cpu : usr=0.73%, sys=1.08%, ctx=3026, majf=0, minf=1 00:19:03.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:19:03.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.384 issued rwts: total=0,2921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.384 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.384 job8: (groupid=0, jobs=1): err= 0: pid=80719: Sun Nov 17 22:18:58 2024 00:19:03.384 write: IOPS=1287, BW=322MiB/s (338MB/s)(3233MiB/10041msec); 0 zone resets 00:19:03.384 slat (usec): min=17, max=105292, avg=751.10, stdev=1975.81 00:19:03.384 clat (usec): min=1665, max=335759, avg=48918.91, stdev=32336.12 00:19:03.384 lat (msec): min=2, max=336, avg=49.67, stdev=32.74 00:19:03.384 clat percentiles (msec): 00:19:03.384 | 1.00th=[ 15], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:19:03.384 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:19:03.384 | 70.00th=[ 47], 80.00th=[ 47], 90.00th=[ 48], 95.00th=[ 48], 00:19:03.384 | 99.00th=[ 300], 99.50th=[ 309], 99.90th=[ 321], 99.95th=[ 326], 00:19:03.384 | 99.99th=[ 334] 00:19:03.384 bw ( KiB/s): min=61440, max=373760, per=29.60%, avg=329446.40, stdev=86287.21, samples=20 00:19:03.384 iops : min= 240, max= 1460, avg=1286.90, stdev=337.06, samples=20 00:19:03.384 lat (msec) : 2=0.01%, 4=0.07%, 10=0.51%, 20=0.80%, 50=96.03% 00:19:03.384 lat (msec) : 100=0.77%, 250=0.36%, 500=1.46% 00:19:03.384 cpu : usr=1.67%, sys=2.91%, ctx=16768, majf=0, minf=2 00:19:03.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:03.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.384 issued rwts: total=0,12932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.384 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.384 job9: (groupid=0, jobs=1): err= 0: pid=80720: Sun Nov 17 22:18:58 2024 00:19:03.384 write: IOPS=270, BW=67.7MiB/s (71.0MB/s)(689MiB/10173msec); 0 zone resets 00:19:03.384 slat (usec): min=14, max=68719, avg=3625.26, stdev=7134.53 00:19:03.384 clat (msec): min=29, max=361, avg=232.51, stdev=39.60 00:19:03.384 lat (msec): min=29, max=361, avg=236.13, stdev=39.44 00:19:03.384 clat percentiles (msec): 00:19:03.384 | 1.00th=[ 138], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 201], 00:19:03.384 | 30.00th=[ 205], 40.00th=[ 211], 50.00th=[ 218], 60.00th=[ 249], 00:19:03.384 | 70.00th=[ 262], 80.00th=[ 268], 90.00th=[ 279], 95.00th=[ 296], 00:19:03.384 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 363], 00:19:03.384 | 99.99th=[ 363] 00:19:03.384 bw ( KiB/s): min=51200, max=81920, per=6.19%, avg=68915.20, stdev=10084.41, samples=20 00:19:03.384 iops : min= 200, max= 320, avg=269.20, stdev=39.39, samples=20 00:19:03.384 lat (msec) : 50=0.11%, 100=0.58%, 250=60.20%, 500=39.11% 00:19:03.384 cpu : usr=0.87%, sys=0.92%, ctx=2466, majf=0, minf=1 00:19:03.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 
32=1.2%, >=64=97.7% 00:19:03.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.384 issued rwts: total=0,2756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.384 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.384 job10: (groupid=0, jobs=1): err= 0: pid=80721: Sun Nov 17 22:18:58 2024 00:19:03.384 write: IOPS=314, BW=78.7MiB/s (82.5MB/s)(801MiB/10174msec); 0 zone resets 00:19:03.384 slat (usec): min=20, max=38774, avg=3014.54, stdev=5502.79 00:19:03.384 clat (msec): min=9, max=380, avg=200.18, stdev=37.13 00:19:03.384 lat (msec): min=9, max=380, avg=203.19, stdev=37.35 00:19:03.384 clat percentiles (msec): 00:19:03.384 | 1.00th=[ 45], 5.00th=[ 146], 10.00th=[ 157], 20.00th=[ 194], 00:19:03.384 | 30.00th=[ 201], 40.00th=[ 207], 50.00th=[ 209], 60.00th=[ 211], 00:19:03.384 | 70.00th=[ 213], 80.00th=[ 215], 90.00th=[ 222], 95.00th=[ 239], 00:19:03.384 | 99.00th=[ 288], 99.50th=[ 330], 99.90th=[ 368], 99.95th=[ 380], 00:19:03.384 | 99.99th=[ 380] 00:19:03.384 bw ( KiB/s): min=65024, max=106496, per=7.23%, avg=80448.50, stdev=10421.42, samples=20 00:19:03.384 iops : min= 254, max= 416, avg=314.00, stdev=40.67, samples=20 00:19:03.384 lat (msec) : 10=0.19%, 20=0.16%, 50=0.94%, 100=1.72%, 250=93.88% 00:19:03.384 lat (msec) : 500=3.12% 00:19:03.384 cpu : usr=0.90%, sys=0.88%, ctx=3911, majf=0, minf=1 00:19:03.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:03.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.384 issued rwts: total=0,3203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.384 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.384 00:19:03.384 Run status group 0 (all jobs): 00:19:03.384 WRITE: bw=1087MiB/s (1140MB/s), 67.7MiB/s-322MiB/s (71.0MB/s-338MB/s), io=10.8GiB (11.6GB), run=10041-10175msec 00:19:03.384 00:19:03.384 Disk stats (read/write): 00:19:03.384 nvme0n1: ios=49/6864, merge=0/0, ticks=42/1208828, in_queue=1208870, util=97.72% 00:19:03.384 nvme10n1: ios=49/5544, merge=0/0, ticks=27/1208009, in_queue=1208036, util=97.99% 00:19:03.384 nvme1n1: ios=36/6604, merge=0/0, ticks=42/1209652, in_queue=1209694, util=98.04% 00:19:03.384 nvme2n1: ios=15/5556, merge=0/0, ticks=29/1208285, in_queue=1208314, util=98.01% 00:19:03.384 nvme3n1: ios=0/5553, merge=0/0, ticks=0/1209004, in_queue=1209004, util=98.15% 00:19:03.384 nvme4n1: ios=0/6905, merge=0/0, ticks=0/1209364, in_queue=1209364, util=98.22% 00:19:03.384 nvme5n1: ios=0/6907, merge=0/0, ticks=0/1208398, in_queue=1208398, util=98.33% 00:19:03.384 nvme6n1: ios=0/5706, merge=0/0, ticks=0/1207152, in_queue=1207152, util=98.35% 00:19:03.384 nvme7n1: ios=0/25738, merge=0/0, ticks=0/1220004, in_queue=1220004, util=98.79% 00:19:03.384 nvme8n1: ios=0/5377, merge=0/0, ticks=0/1207479, in_queue=1207479, util=98.79% 00:19:03.384 nvme9n1: ios=0/6286, merge=0/0, ticks=0/1212154, in_queue=1212154, util=99.05% 00:19:03.384 22:18:58 -- target/multiconnection.sh@36 -- # sync 00:19:03.384 22:18:58 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:03.384 22:18:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.384 22:18:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:03.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:03.384 22:18:58 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:03.384 22:18:58 -- common/autotest_common.sh@1208 -- # local i=0 00:19:03.384 22:18:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:19:03.384 22:18:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:03.384 22:18:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:03.384 22:18:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:19:03.384 22:18:58 -- common/autotest_common.sh@1220 -- # return 0 00:19:03.385 22:18:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.385 22:18:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.385 22:18:58 -- common/autotest_common.sh@10 -- # set +x 00:19:03.385 22:18:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.385 22:18:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.385 22:18:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:03.385 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:03.385 22:18:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:03.385 22:18:59 -- common/autotest_common.sh@1208 -- # local i=0 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:03.385 22:18:59 -- common/autotest_common.sh@1220 -- # return 0 00:19:03.385 22:18:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:03.385 22:18:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.385 22:18:59 -- common/autotest_common.sh@10 -- # set +x 00:19:03.385 22:18:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.385 22:18:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.385 22:18:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:03.385 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:03.385 22:18:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:03.385 22:18:59 -- common/autotest_common.sh@1208 -- # local i=0 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:03.385 22:18:59 -- common/autotest_common.sh@1220 -- # return 0 00:19:03.385 22:18:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:03.385 22:18:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.385 22:18:59 -- common/autotest_common.sh@10 -- # set +x 00:19:03.385 22:18:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.385 22:18:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.385 22:18:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:03.385 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:03.385 22:18:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:03.385 22:18:59 -- 
common/autotest_common.sh@1208 -- # local i=0 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1220 -- # return 0 00:19:03.385 22:18:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:03.385 22:18:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.385 22:18:59 -- common/autotest_common.sh@10 -- # set +x 00:19:03.385 22:18:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.385 22:18:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.385 22:18:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:03.385 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:03.385 22:18:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:03.385 22:18:59 -- common/autotest_common.sh@1208 -- # local i=0 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:03.385 22:18:59 -- common/autotest_common.sh@1220 -- # return 0 00:19:03.385 22:18:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:03.385 22:18:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.385 22:18:59 -- common/autotest_common.sh@10 -- # set +x 00:19:03.385 22:18:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.385 22:18:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.385 22:18:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:03.385 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:03.385 22:18:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:03.385 22:18:59 -- common/autotest_common.sh@1208 -- # local i=0 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1220 -- # return 0 00:19:03.385 22:18:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:03.385 22:18:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.385 22:18:59 -- common/autotest_common.sh@10 -- # set +x 00:19:03.385 22:18:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.385 22:18:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.385 22:18:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:03.385 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:03.385 22:18:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:03.385 22:18:59 -- common/autotest_common.sh@1208 -- # local i=0 00:19:03.385 22:18:59 -- 
common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:19:03.385 22:18:59 -- common/autotest_common.sh@1220 -- # return 0 00:19:03.385 22:18:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:03.385 22:18:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.385 22:18:59 -- common/autotest_common.sh@10 -- # set +x 00:19:03.385 22:18:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.385 22:18:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.385 22:18:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:03.385 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:03.385 22:18:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:03.385 22:18:59 -- common/autotest_common.sh@1208 -- # local i=0 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:19:03.385 22:18:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:03.385 22:18:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:03.645 22:18:59 -- common/autotest_common.sh@1220 -- # return 0 00:19:03.645 22:18:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:03.645 22:18:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.645 22:18:59 -- common/autotest_common.sh@10 -- # set +x 00:19:03.645 22:19:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.645 22:19:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.645 22:19:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:03.645 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:03.645 22:19:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:03.645 22:19:00 -- common/autotest_common.sh@1208 -- # local i=0 00:19:03.645 22:19:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:19:03.645 22:19:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:03.645 22:19:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:03.645 22:19:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:03.645 22:19:00 -- common/autotest_common.sh@1220 -- # return 0 00:19:03.645 22:19:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:03.645 22:19:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.645 22:19:00 -- common/autotest_common.sh@10 -- # set +x 00:19:03.645 22:19:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.645 22:19:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.645 22:19:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:03.645 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:03.645 22:19:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:03.645 22:19:00 -- common/autotest_common.sh@1208 -- # local i=0 00:19:03.645 22:19:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:03.645 22:19:00 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:19:03.645 22:19:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:03.645 22:19:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:03.645 22:19:00 -- common/autotest_common.sh@1220 -- # return 0 00:19:03.645 22:19:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:03.645 22:19:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.645 22:19:00 -- common/autotest_common.sh@10 -- # set +x 00:19:03.905 22:19:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.905 22:19:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.905 22:19:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:03.905 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:03.905 22:19:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:03.905 22:19:00 -- common/autotest_common.sh@1208 -- # local i=0 00:19:03.905 22:19:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:03.905 22:19:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:19:03.905 22:19:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:03.905 22:19:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:03.905 22:19:00 -- common/autotest_common.sh@1220 -- # return 0 00:19:03.905 22:19:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:03.905 22:19:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.905 22:19:00 -- common/autotest_common.sh@10 -- # set +x 00:19:03.905 22:19:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.905 22:19:00 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:03.905 22:19:00 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:03.905 22:19:00 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:03.905 22:19:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:03.905 22:19:00 -- nvmf/common.sh@116 -- # sync 00:19:03.905 22:19:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:03.905 22:19:00 -- nvmf/common.sh@119 -- # set +e 00:19:03.905 22:19:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:03.905 22:19:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:03.905 rmmod nvme_tcp 00:19:03.905 rmmod nvme_fabrics 00:19:03.905 rmmod nvme_keyring 00:19:03.905 22:19:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:04.164 22:19:00 -- nvmf/common.sh@123 -- # set -e 00:19:04.164 22:19:00 -- nvmf/common.sh@124 -- # return 0 00:19:04.164 22:19:00 -- nvmf/common.sh@477 -- # '[' -n 80011 ']' 00:19:04.164 22:19:00 -- nvmf/common.sh@478 -- # killprocess 80011 00:19:04.164 22:19:00 -- common/autotest_common.sh@936 -- # '[' -z 80011 ']' 00:19:04.164 22:19:00 -- common/autotest_common.sh@940 -- # kill -0 80011 00:19:04.164 22:19:00 -- common/autotest_common.sh@941 -- # uname 00:19:04.164 22:19:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:04.164 22:19:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80011 00:19:04.164 22:19:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:04.164 22:19:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:04.164 killing process with pid 80011 00:19:04.164 22:19:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80011' 00:19:04.164 22:19:00 -- 
common/autotest_common.sh@955 -- # kill 80011 00:19:04.164 22:19:00 -- common/autotest_common.sh@960 -- # wait 80011 00:19:04.733 22:19:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:04.733 22:19:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:04.733 22:19:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:04.733 22:19:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.733 22:19:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:04.733 22:19:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.733 22:19:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.733 22:19:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.733 22:19:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:04.733 00:19:04.733 real 0m50.608s 00:19:04.733 user 2m57.918s 00:19:04.733 sys 0m19.688s 00:19:04.733 22:19:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:04.733 ************************************ 00:19:04.733 END TEST nvmf_multiconnection 00:19:04.733 22:19:01 -- common/autotest_common.sh@10 -- # set +x 00:19:04.733 ************************************ 00:19:04.733 22:19:01 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:04.733 22:19:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:04.733 22:19:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:04.733 22:19:01 -- common/autotest_common.sh@10 -- # set +x 00:19:04.992 ************************************ 00:19:04.992 START TEST nvmf_initiator_timeout 00:19:04.992 ************************************ 00:19:04.992 22:19:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:04.992 * Looking for test storage... 00:19:04.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:04.992 22:19:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:04.992 22:19:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:04.992 22:19:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:04.992 22:19:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:04.992 22:19:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:04.992 22:19:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:04.992 22:19:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:04.992 22:19:01 -- scripts/common.sh@335 -- # IFS=.-: 00:19:04.992 22:19:01 -- scripts/common.sh@335 -- # read -ra ver1 00:19:04.992 22:19:01 -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.992 22:19:01 -- scripts/common.sh@336 -- # read -ra ver2 00:19:04.992 22:19:01 -- scripts/common.sh@337 -- # local 'op=<' 00:19:04.992 22:19:01 -- scripts/common.sh@339 -- # ver1_l=2 00:19:04.992 22:19:01 -- scripts/common.sh@340 -- # ver2_l=1 00:19:04.992 22:19:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:04.992 22:19:01 -- scripts/common.sh@343 -- # case "$op" in 00:19:04.992 22:19:01 -- scripts/common.sh@344 -- # : 1 00:19:04.992 22:19:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:04.992 22:19:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.992 22:19:01 -- scripts/common.sh@364 -- # decimal 1 00:19:04.992 22:19:01 -- scripts/common.sh@352 -- # local d=1 00:19:04.992 22:19:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.992 22:19:01 -- scripts/common.sh@354 -- # echo 1 00:19:04.992 22:19:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:04.992 22:19:01 -- scripts/common.sh@365 -- # decimal 2 00:19:04.992 22:19:01 -- scripts/common.sh@352 -- # local d=2 00:19:04.992 22:19:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.992 22:19:01 -- scripts/common.sh@354 -- # echo 2 00:19:04.992 22:19:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:04.992 22:19:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:04.992 22:19:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:04.992 22:19:01 -- scripts/common.sh@367 -- # return 0 00:19:04.992 22:19:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.992 22:19:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:04.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.992 --rc genhtml_branch_coverage=1 00:19:04.992 --rc genhtml_function_coverage=1 00:19:04.992 --rc genhtml_legend=1 00:19:04.992 --rc geninfo_all_blocks=1 00:19:04.992 --rc geninfo_unexecuted_blocks=1 00:19:04.992 00:19:04.992 ' 00:19:04.992 22:19:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:04.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.992 --rc genhtml_branch_coverage=1 00:19:04.992 --rc genhtml_function_coverage=1 00:19:04.992 --rc genhtml_legend=1 00:19:04.992 --rc geninfo_all_blocks=1 00:19:04.992 --rc geninfo_unexecuted_blocks=1 00:19:04.992 00:19:04.992 ' 00:19:04.992 22:19:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:04.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.992 --rc genhtml_branch_coverage=1 00:19:04.992 --rc genhtml_function_coverage=1 00:19:04.992 --rc genhtml_legend=1 00:19:04.992 --rc geninfo_all_blocks=1 00:19:04.992 --rc geninfo_unexecuted_blocks=1 00:19:04.992 00:19:04.992 ' 00:19:04.992 22:19:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:04.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.992 --rc genhtml_branch_coverage=1 00:19:04.992 --rc genhtml_function_coverage=1 00:19:04.992 --rc genhtml_legend=1 00:19:04.992 --rc geninfo_all_blocks=1 00:19:04.992 --rc geninfo_unexecuted_blocks=1 00:19:04.992 00:19:04.992 ' 00:19:04.992 22:19:01 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:04.992 22:19:01 -- nvmf/common.sh@7 -- # uname -s 00:19:04.992 22:19:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.992 22:19:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.992 22:19:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.992 22:19:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.992 22:19:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.992 22:19:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.992 22:19:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.992 22:19:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.992 22:19:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.992 22:19:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.992 22:19:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 
00:19:04.992 22:19:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:19:04.992 22:19:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.992 22:19:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.992 22:19:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:04.992 22:19:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:04.992 22:19:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.992 22:19:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.992 22:19:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.993 22:19:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.993 22:19:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.993 22:19:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.993 22:19:01 -- paths/export.sh@5 -- # export PATH 00:19:04.993 22:19:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.993 22:19:01 -- nvmf/common.sh@46 -- # : 0 00:19:04.993 22:19:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:04.993 22:19:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:04.993 22:19:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:04.993 22:19:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.993 22:19:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.993 22:19:01 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:04.993 22:19:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:04.993 22:19:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:04.993 22:19:01 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:04.993 22:19:01 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:04.993 22:19:01 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:04.993 22:19:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:04.993 22:19:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.993 22:19:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:04.993 22:19:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:04.993 22:19:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:04.993 22:19:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.993 22:19:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.993 22:19:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.993 22:19:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:04.993 22:19:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:04.993 22:19:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:04.993 22:19:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:04.993 22:19:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:04.993 22:19:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:04.993 22:19:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.993 22:19:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.993 22:19:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:04.993 22:19:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:04.993 22:19:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:04.993 22:19:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:04.993 22:19:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:04.993 22:19:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.993 22:19:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:04.993 22:19:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:04.993 22:19:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:04.993 22:19:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:04.993 22:19:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:04.993 22:19:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:04.993 Cannot find device "nvmf_tgt_br" 00:19:04.993 22:19:01 -- nvmf/common.sh@154 -- # true 00:19:04.993 22:19:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:04.993 Cannot find device "nvmf_tgt_br2" 00:19:04.993 22:19:01 -- nvmf/common.sh@155 -- # true 00:19:04.993 22:19:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:04.993 22:19:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:05.252 Cannot find device "nvmf_tgt_br" 00:19:05.252 22:19:01 -- nvmf/common.sh@157 -- # true 00:19:05.252 22:19:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:05.252 Cannot find device "nvmf_tgt_br2" 00:19:05.252 22:19:01 -- nvmf/common.sh@158 -- # true 00:19:05.252 22:19:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:05.252 22:19:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:05.252 22:19:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:05.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:05.252 22:19:01 -- nvmf/common.sh@161 -- # true 00:19:05.252 22:19:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:05.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:05.252 22:19:01 -- nvmf/common.sh@162 -- # true 00:19:05.252 22:19:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:05.252 22:19:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:05.252 22:19:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:05.252 22:19:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:05.252 22:19:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:05.252 22:19:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:05.252 22:19:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:05.252 22:19:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:05.252 22:19:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:05.252 22:19:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:05.252 22:19:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:05.252 22:19:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:05.252 22:19:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:05.252 22:19:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:05.252 22:19:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:05.252 22:19:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:05.252 22:19:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:05.252 22:19:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:05.252 22:19:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:05.252 22:19:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:05.252 22:19:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:05.252 22:19:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:05.252 22:19:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:05.511 22:19:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:05.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:19:05.511 00:19:05.511 --- 10.0.0.2 ping statistics --- 00:19:05.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.511 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:19:05.511 22:19:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:05.511 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:05.511 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:19:05.511 00:19:05.511 --- 10.0.0.3 ping statistics --- 00:19:05.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.511 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:05.511 22:19:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:05.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:05.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:05.511 00:19:05.511 --- 10.0.0.1 ping statistics --- 00:19:05.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.511 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:05.511 22:19:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.511 22:19:01 -- nvmf/common.sh@421 -- # return 0 00:19:05.511 22:19:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:05.511 22:19:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.511 22:19:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:05.511 22:19:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:05.511 22:19:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.511 22:19:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:05.511 22:19:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:05.511 22:19:01 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:05.511 22:19:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:05.511 22:19:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:05.511 22:19:01 -- common/autotest_common.sh@10 -- # set +x 00:19:05.511 22:19:01 -- nvmf/common.sh@469 -- # nvmfpid=81101 00:19:05.511 22:19:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:05.511 22:19:01 -- nvmf/common.sh@470 -- # waitforlisten 81101 00:19:05.511 22:19:01 -- common/autotest_common.sh@829 -- # '[' -z 81101 ']' 00:19:05.511 22:19:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.511 22:19:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:05.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.511 22:19:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.511 22:19:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:05.511 22:19:01 -- common/autotest_common.sh@10 -- # set +x 00:19:05.511 [2024-11-17 22:19:01.963755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:05.511 [2024-11-17 22:19:01.963852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.511 [2024-11-17 22:19:02.096159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:05.770 [2024-11-17 22:19:02.185424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:05.770 [2024-11-17 22:19:02.185573] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.770 [2024-11-17 22:19:02.185587] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.770 [2024-11-17 22:19:02.185594] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:05.770 [2024-11-17 22:19:02.185773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.770 [2024-11-17 22:19:02.186373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.770 [2024-11-17 22:19:02.186550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:05.770 [2024-11-17 22:19:02.186592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.338 22:19:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:06.338 22:19:02 -- common/autotest_common.sh@862 -- # return 0 00:19:06.338 22:19:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:06.338 22:19:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:06.338 22:19:02 -- common/autotest_common.sh@10 -- # set +x 00:19:06.338 22:19:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.338 22:19:02 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:06.338 22:19:02 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:06.597 22:19:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.597 22:19:02 -- common/autotest_common.sh@10 -- # set +x 00:19:06.597 Malloc0 00:19:06.597 22:19:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.597 22:19:02 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:06.597 22:19:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.597 22:19:02 -- common/autotest_common.sh@10 -- # set +x 00:19:06.597 Delay0 00:19:06.597 22:19:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.597 22:19:03 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:06.597 22:19:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.597 22:19:03 -- common/autotest_common.sh@10 -- # set +x 00:19:06.597 [2024-11-17 22:19:03.005416] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.597 22:19:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.597 22:19:03 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:06.597 22:19:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.597 22:19:03 -- common/autotest_common.sh@10 -- # set +x 00:19:06.597 22:19:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.597 22:19:03 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:06.597 22:19:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.597 22:19:03 -- common/autotest_common.sh@10 -- # set +x 00:19:06.597 22:19:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.597 22:19:03 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.597 22:19:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.597 22:19:03 -- common/autotest_common.sh@10 -- # set +x 00:19:06.597 [2024-11-17 22:19:03.033673] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.597 22:19:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.597 22:19:03 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:06.857 22:19:03 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:06.857 22:19:03 -- common/autotest_common.sh@1187 -- # local i=0 00:19:06.857 22:19:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:06.857 22:19:03 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:06.857 22:19:03 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:08.763 22:19:05 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:08.763 22:19:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:08.763 22:19:05 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:08.763 22:19:05 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:08.763 22:19:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:08.763 22:19:05 -- common/autotest_common.sh@1197 -- # return 0 00:19:08.763 22:19:05 -- target/initiator_timeout.sh@35 -- # fio_pid=81189 00:19:08.763 22:19:05 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:08.763 22:19:05 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:08.763 [global] 00:19:08.763 thread=1 00:19:08.763 invalidate=1 00:19:08.763 rw=write 00:19:08.763 time_based=1 00:19:08.763 runtime=60 00:19:08.763 ioengine=libaio 00:19:08.763 direct=1 00:19:08.763 bs=4096 00:19:08.763 iodepth=1 00:19:08.763 norandommap=0 00:19:08.763 numjobs=1 00:19:08.763 00:19:08.763 verify_dump=1 00:19:08.763 verify_backlog=512 00:19:08.763 verify_state_save=0 00:19:08.763 do_verify=1 00:19:08.763 verify=crc32c-intel 00:19:08.763 [job0] 00:19:08.763 filename=/dev/nvme0n1 00:19:08.763 Could not set queue depth (nvme0n1) 00:19:09.021 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.021 fio-3.35 00:19:09.021 Starting 1 thread 00:19:12.311 22:19:08 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:12.311 22:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.311 22:19:08 -- common/autotest_common.sh@10 -- # set +x 00:19:12.311 true 00:19:12.311 22:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.311 22:19:08 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:12.311 22:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.311 22:19:08 -- common/autotest_common.sh@10 -- # set +x 00:19:12.311 true 00:19:12.311 22:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.311 22:19:08 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:12.311 22:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.311 22:19:08 -- common/autotest_common.sh@10 -- # set +x 00:19:12.311 true 00:19:12.311 22:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.311 22:19:08 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:12.311 22:19:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.311 22:19:08 -- common/autotest_common.sh@10 -- # set +x 00:19:12.311 true 00:19:12.311 22:19:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.311 22:19:08 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:14.846 22:19:11 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:14.846 22:19:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.846 22:19:11 -- common/autotest_common.sh@10 -- # set +x 00:19:14.846 true 00:19:14.846 22:19:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.846 22:19:11 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:14.846 22:19:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.846 22:19:11 -- common/autotest_common.sh@10 -- # set +x 00:19:14.846 true 00:19:14.846 22:19:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.846 22:19:11 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:14.846 22:19:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.846 22:19:11 -- common/autotest_common.sh@10 -- # set +x 00:19:14.846 true 00:19:14.846 22:19:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.846 22:19:11 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:14.846 22:19:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.846 22:19:11 -- common/autotest_common.sh@10 -- # set +x 00:19:14.846 true 00:19:14.846 22:19:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.846 22:19:11 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:14.846 22:19:11 -- target/initiator_timeout.sh@54 -- # wait 81189 00:20:11.130 00:20:11.131 job0: (groupid=0, jobs=1): err= 0: pid=81210: Sun Nov 17 22:20:05 2024 00:20:11.131 read: IOPS=839, BW=3358KiB/s (3439kB/s)(197MiB/60000msec) 00:20:11.131 slat (usec): min=12, max=291, avg=15.07, stdev= 5.07 00:20:11.131 clat (usec): min=76, max=1081, avg=192.85, stdev=19.80 00:20:11.131 lat (usec): min=163, max=1100, avg=207.92, stdev=21.17 00:20:11.131 clat percentiles (usec): 00:20:11.131 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:20:11.131 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:20:11.131 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 225], 00:20:11.131 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 334], 99.95th=[ 400], 00:20:11.131 | 99.99th=[ 578] 00:20:11.131 write: IOPS=844, BW=3379KiB/s (3460kB/s)(198MiB/60000msec); 0 zone resets 00:20:11.131 slat (usec): min=18, max=14888, avg=22.46, stdev=75.31 00:20:11.131 clat (usec): min=3, max=40480k, avg=951.47, stdev=179797.22 00:20:11.131 lat (usec): min=139, max=40480k, avg=973.93, stdev=179797.33 00:20:11.131 clat percentiles (usec): 00:20:11.131 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:20:11.131 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:20:11.131 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 182], 00:20:11.131 | 99.00th=[ 208], 99.50th=[ 223], 99.90th=[ 424], 99.95th=[ 562], 00:20:11.131 | 99.99th=[ 898] 00:20:11.131 bw ( KiB/s): min= 5016, max=12288, per=100.00%, avg=10187.49, stdev=1596.21, samples=39 00:20:11.131 iops : min= 1254, max= 3072, avg=2546.87, stdev=399.05, samples=39 00:20:11.131 lat (usec) : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%, 100=0.01% 00:20:11.131 lat (usec) : 250=99.35%, 500=0.60%, 750=0.03%, 1000=0.01% 00:20:11.131 lat (msec) : 2=0.01%, >=2000=0.01% 00:20:11.131 cpu : usr=0.62%, sys=2.27%, ctx=101079, majf=0, minf=5 00:20:11.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:11.131 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.131 issued rwts: total=50377,50688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.131 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:11.131 00:20:11.131 Run status group 0 (all jobs): 00:20:11.131 READ: bw=3358KiB/s (3439kB/s), 3358KiB/s-3358KiB/s (3439kB/s-3439kB/s), io=197MiB (206MB), run=60000-60000msec 00:20:11.131 WRITE: bw=3379KiB/s (3460kB/s), 3379KiB/s-3379KiB/s (3460kB/s-3460kB/s), io=198MiB (208MB), run=60000-60000msec 00:20:11.131 00:20:11.131 Disk stats (read/write): 00:20:11.131 nvme0n1: ios=50512/50354, merge=0/0, ticks=10226/8367, in_queue=18593, util=99.76% 00:20:11.131 22:20:05 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:11.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:11.131 22:20:05 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:11.131 22:20:05 -- common/autotest_common.sh@1208 -- # local i=0 00:20:11.131 22:20:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:11.131 22:20:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:11.131 22:20:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:11.131 22:20:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:11.131 22:20:05 -- common/autotest_common.sh@1220 -- # return 0 00:20:11.131 22:20:05 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:11.131 nvmf hotplug test: fio successful as expected 00:20:11.131 22:20:05 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:11.131 22:20:05 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.131 22:20:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.131 22:20:05 -- common/autotest_common.sh@10 -- # set +x 00:20:11.131 22:20:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.131 22:20:05 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:11.131 22:20:05 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:11.131 22:20:05 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:11.131 22:20:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:11.131 22:20:05 -- nvmf/common.sh@116 -- # sync 00:20:11.131 22:20:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:11.131 22:20:05 -- nvmf/common.sh@119 -- # set +e 00:20:11.131 22:20:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:11.131 22:20:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:11.131 rmmod nvme_tcp 00:20:11.131 rmmod nvme_fabrics 00:20:11.131 rmmod nvme_keyring 00:20:11.131 22:20:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:11.131 22:20:05 -- nvmf/common.sh@123 -- # set -e 00:20:11.131 22:20:05 -- nvmf/common.sh@124 -- # return 0 00:20:11.131 22:20:05 -- nvmf/common.sh@477 -- # '[' -n 81101 ']' 00:20:11.131 22:20:05 -- nvmf/common.sh@478 -- # killprocess 81101 00:20:11.131 22:20:05 -- common/autotest_common.sh@936 -- # '[' -z 81101 ']' 00:20:11.131 22:20:05 -- common/autotest_common.sh@940 -- # kill -0 81101 00:20:11.131 22:20:05 -- common/autotest_common.sh@941 -- # uname 00:20:11.131 22:20:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:11.131 22:20:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 81101 00:20:11.131 22:20:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:11.131 killing process with pid 81101 00:20:11.131 22:20:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:11.131 22:20:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81101' 00:20:11.131 22:20:05 -- common/autotest_common.sh@955 -- # kill 81101 00:20:11.131 22:20:05 -- common/autotest_common.sh@960 -- # wait 81101 00:20:11.131 22:20:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:11.131 22:20:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:11.131 22:20:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:11.131 22:20:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.131 22:20:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:11.131 22:20:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.131 22:20:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.131 22:20:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.131 22:20:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:11.131 00:20:11.131 real 1m4.737s 00:20:11.131 user 4m8.473s 00:20:11.131 sys 0m7.298s 00:20:11.131 22:20:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:11.131 22:20:06 -- common/autotest_common.sh@10 -- # set +x 00:20:11.131 ************************************ 00:20:11.131 END TEST nvmf_initiator_timeout 00:20:11.131 ************************************ 00:20:11.131 22:20:06 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:11.131 22:20:06 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:11.131 22:20:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.131 22:20:06 -- common/autotest_common.sh@10 -- # set +x 00:20:11.131 22:20:06 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:11.131 22:20:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.131 22:20:06 -- common/autotest_common.sh@10 -- # set +x 00:20:11.131 22:20:06 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:11.131 22:20:06 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:11.131 22:20:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:11.131 22:20:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:11.131 22:20:06 -- common/autotest_common.sh@10 -- # set +x 00:20:11.131 ************************************ 00:20:11.131 START TEST nvmf_multicontroller 00:20:11.131 ************************************ 00:20:11.131 22:20:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:11.131 * Looking for test storage... 
00:20:11.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:11.131 22:20:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:11.131 22:20:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:11.131 22:20:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:11.131 22:20:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:11.132 22:20:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:11.132 22:20:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:11.132 22:20:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:11.132 22:20:06 -- scripts/common.sh@335 -- # IFS=.-: 00:20:11.132 22:20:06 -- scripts/common.sh@335 -- # read -ra ver1 00:20:11.132 22:20:06 -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.132 22:20:06 -- scripts/common.sh@336 -- # read -ra ver2 00:20:11.132 22:20:06 -- scripts/common.sh@337 -- # local 'op=<' 00:20:11.132 22:20:06 -- scripts/common.sh@339 -- # ver1_l=2 00:20:11.132 22:20:06 -- scripts/common.sh@340 -- # ver2_l=1 00:20:11.132 22:20:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:11.132 22:20:06 -- scripts/common.sh@343 -- # case "$op" in 00:20:11.132 22:20:06 -- scripts/common.sh@344 -- # : 1 00:20:11.132 22:20:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:11.132 22:20:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:11.132 22:20:06 -- scripts/common.sh@364 -- # decimal 1 00:20:11.132 22:20:06 -- scripts/common.sh@352 -- # local d=1 00:20:11.132 22:20:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.132 22:20:06 -- scripts/common.sh@354 -- # echo 1 00:20:11.132 22:20:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:11.132 22:20:06 -- scripts/common.sh@365 -- # decimal 2 00:20:11.132 22:20:06 -- scripts/common.sh@352 -- # local d=2 00:20:11.132 22:20:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.132 22:20:06 -- scripts/common.sh@354 -- # echo 2 00:20:11.132 22:20:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:11.132 22:20:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:11.132 22:20:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:11.132 22:20:06 -- scripts/common.sh@367 -- # return 0 00:20:11.132 22:20:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.132 22:20:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:11.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.132 --rc genhtml_branch_coverage=1 00:20:11.132 --rc genhtml_function_coverage=1 00:20:11.132 --rc genhtml_legend=1 00:20:11.132 --rc geninfo_all_blocks=1 00:20:11.132 --rc geninfo_unexecuted_blocks=1 00:20:11.132 00:20:11.132 ' 00:20:11.132 22:20:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:11.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.132 --rc genhtml_branch_coverage=1 00:20:11.132 --rc genhtml_function_coverage=1 00:20:11.132 --rc genhtml_legend=1 00:20:11.132 --rc geninfo_all_blocks=1 00:20:11.132 --rc geninfo_unexecuted_blocks=1 00:20:11.132 00:20:11.132 ' 00:20:11.132 22:20:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:11.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.132 --rc genhtml_branch_coverage=1 00:20:11.132 --rc genhtml_function_coverage=1 00:20:11.132 --rc genhtml_legend=1 00:20:11.132 --rc geninfo_all_blocks=1 00:20:11.132 --rc geninfo_unexecuted_blocks=1 00:20:11.132 00:20:11.132 ' 00:20:11.132 
22:20:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:11.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.132 --rc genhtml_branch_coverage=1 00:20:11.132 --rc genhtml_function_coverage=1 00:20:11.132 --rc genhtml_legend=1 00:20:11.132 --rc geninfo_all_blocks=1 00:20:11.132 --rc geninfo_unexecuted_blocks=1 00:20:11.132 00:20:11.132 ' 00:20:11.132 22:20:06 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:11.132 22:20:06 -- nvmf/common.sh@7 -- # uname -s 00:20:11.132 22:20:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.132 22:20:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.132 22:20:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.132 22:20:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.132 22:20:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.132 22:20:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.132 22:20:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.132 22:20:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.132 22:20:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.132 22:20:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.132 22:20:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:20:11.132 22:20:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:20:11.132 22:20:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.132 22:20:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.132 22:20:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:11.132 22:20:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:11.132 22:20:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.132 22:20:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.132 22:20:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.132 22:20:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.132 22:20:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.132 22:20:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.132 22:20:06 -- paths/export.sh@5 -- # export PATH 00:20:11.132 22:20:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.132 22:20:06 -- nvmf/common.sh@46 -- # : 0 00:20:11.132 22:20:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:11.132 22:20:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:11.132 22:20:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:11.132 22:20:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.132 22:20:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.132 22:20:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:11.132 22:20:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:11.132 22:20:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:11.132 22:20:06 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:11.132 22:20:06 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:11.132 22:20:06 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:11.132 22:20:06 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:11.132 22:20:06 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:11.132 22:20:06 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:11.132 22:20:06 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:11.132 22:20:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:11.132 22:20:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.132 22:20:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:11.132 22:20:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:11.132 22:20:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:11.132 22:20:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.132 22:20:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.132 22:20:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.132 22:20:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:11.132 22:20:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:11.132 22:20:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:11.132 22:20:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:11.132 22:20:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:11.132 22:20:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:11.132 22:20:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.132 22:20:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:20:11.132 22:20:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:11.132 22:20:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:11.132 22:20:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:11.132 22:20:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:11.133 22:20:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:11.133 22:20:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.133 22:20:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:11.133 22:20:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:11.133 22:20:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:11.133 22:20:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:11.133 22:20:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:11.133 22:20:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:11.133 Cannot find device "nvmf_tgt_br" 00:20:11.133 22:20:06 -- nvmf/common.sh@154 -- # true 00:20:11.133 22:20:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:11.133 Cannot find device "nvmf_tgt_br2" 00:20:11.133 22:20:06 -- nvmf/common.sh@155 -- # true 00:20:11.133 22:20:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:11.133 22:20:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:11.133 Cannot find device "nvmf_tgt_br" 00:20:11.133 22:20:06 -- nvmf/common.sh@157 -- # true 00:20:11.133 22:20:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:11.133 Cannot find device "nvmf_tgt_br2" 00:20:11.133 22:20:06 -- nvmf/common.sh@158 -- # true 00:20:11.133 22:20:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:11.133 22:20:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:11.133 22:20:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:11.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:11.133 22:20:06 -- nvmf/common.sh@161 -- # true 00:20:11.133 22:20:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:11.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:11.133 22:20:06 -- nvmf/common.sh@162 -- # true 00:20:11.133 22:20:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:11.133 22:20:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:11.133 22:20:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:11.133 22:20:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:11.133 22:20:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:11.133 22:20:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:11.133 22:20:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:11.133 22:20:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:11.133 22:20:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:11.133 22:20:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:11.133 22:20:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:11.133 22:20:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:20:11.133 22:20:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:11.133 22:20:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:11.133 22:20:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:11.133 22:20:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:11.133 22:20:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:11.133 22:20:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:11.133 22:20:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:11.133 22:20:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:11.133 22:20:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:11.133 22:20:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:11.133 22:20:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:11.133 22:20:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:11.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:20:11.133 00:20:11.133 --- 10.0.0.2 ping statistics --- 00:20:11.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.133 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:11.133 22:20:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:11.133 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:11.133 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:20:11.133 00:20:11.133 --- 10.0.0.3 ping statistics --- 00:20:11.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.133 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:11.133 22:20:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:11.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:11.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:20:11.133 00:20:11.133 --- 10.0.0.1 ping statistics --- 00:20:11.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.133 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:11.133 22:20:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.133 22:20:06 -- nvmf/common.sh@421 -- # return 0 00:20:11.133 22:20:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:11.133 22:20:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.133 22:20:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:11.133 22:20:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:11.133 22:20:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.133 22:20:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:11.133 22:20:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:11.133 22:20:06 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:11.133 22:20:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:11.133 22:20:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.133 22:20:06 -- common/autotest_common.sh@10 -- # set +x 00:20:11.133 22:20:06 -- nvmf/common.sh@469 -- # nvmfpid=82045 00:20:11.133 22:20:06 -- nvmf/common.sh@470 -- # waitforlisten 82045 00:20:11.133 22:20:06 -- common/autotest_common.sh@829 -- # '[' -z 82045 ']' 00:20:11.133 22:20:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.133 22:20:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:11.133 22:20:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.133 22:20:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.133 22:20:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.133 22:20:06 -- common/autotest_common.sh@10 -- # set +x 00:20:11.133 [2024-11-17 22:20:06.787054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:11.133 [2024-11-17 22:20:06.787126] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.133 [2024-11-17 22:20:06.918595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:11.133 [2024-11-17 22:20:06.992202] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:11.133 [2024-11-17 22:20:06.992358] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.133 [2024-11-17 22:20:06.992372] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.133 [2024-11-17 22:20:06.992381] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
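Up to this point the trace is pure fixture work: nvmf_veth_init tears down any leftover interfaces, then rebuilds the throwaway test network these NVMe/TCP host tests run on. Three veth pairs are created; the target ends (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the initiator end (nvmf_init_if at 10.0.0.1) stays in the root namespace, and the peer ends are enslaved to the nvmf_br bridge so the two sides can reach each other. Once the ping checks pass, nvme-tcp is loaded and nvmf_tgt is started inside the namespace via ip netns exec. A minimal standalone sketch of the same topology, assuming iproute2 and root privileges (names simply mirror the ones in the trace):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port (10.0.0.2)
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port (10.0.0.3)
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # sanity-check both target addresses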
00:20:11.133 [2024-11-17 22:20:06.992545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.133 [2024-11-17 22:20:06.993055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.133 [2024-11-17 22:20:06.993062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.393 22:20:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.393 22:20:07 -- common/autotest_common.sh@862 -- # return 0 00:20:11.393 22:20:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:11.393 22:20:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.393 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:20:11.393 22:20:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.393 22:20:07 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.393 22:20:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.393 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:20:11.393 [2024-11-17 22:20:07.872285] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.393 22:20:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.393 22:20:07 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:11.393 22:20:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.393 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:20:11.393 Malloc0 00:20:11.393 22:20:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.393 22:20:07 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:11.393 22:20:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.393 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:20:11.393 22:20:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.393 22:20:07 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:11.393 22:20:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.393 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:20:11.393 22:20:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.393 22:20:07 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.393 22:20:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.393 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:20:11.393 [2024-11-17 22:20:07.941222] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.393 22:20:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.393 22:20:07 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:11.393 22:20:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.393 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:20:11.393 [2024-11-17 22:20:07.949179] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:11.393 22:20:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.393 22:20:07 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:11.393 22:20:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.393 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:20:11.393 Malloc1 00:20:11.393 22:20:07 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.393 22:20:07 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:11.393 22:20:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.393 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:20:11.393 22:20:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.393 22:20:07 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:11.393 22:20:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.393 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:20:11.393 22:20:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.393 22:20:07 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:11.393 22:20:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.393 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:20:11.393 22:20:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.393 22:20:08 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:11.393 22:20:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.393 22:20:08 -- common/autotest_common.sh@10 -- # set +x 00:20:11.652 22:20:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.652 22:20:08 -- host/multicontroller.sh@44 -- # bdevperf_pid=82097 00:20:11.652 22:20:08 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:11.652 22:20:08 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:11.652 22:20:08 -- host/multicontroller.sh@47 -- # waitforlisten 82097 /var/tmp/bdevperf.sock 00:20:11.652 22:20:08 -- common/autotest_common.sh@829 -- # '[' -z 82097 ']' 00:20:11.652 22:20:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.652 22:20:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.652 22:20:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
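The target side of the multicontroller test is now in place: subsystem nqn.2016-06.io.spdk:cnode1 backed by the 64 MiB / 512-byte-block Malloc0 and subsystem cnode2 backed by Malloc1, each listening on both TCP ports 4420 and 4421 of 10.0.0.2, plus a bdevperf instance started with -z so it idles until it is configured over its own RPC socket at /var/tmp/bdevperf.sock. As a sketch, the same configuration expressed as direct RPC calls (rpc_cmd in these scripts forwards to scripts/rpc.py; the paths below assume the repo layout shown in the trace):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 is built the same way around Malloc1, after which bdevperf is launched with
#   bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
# and the test waits for it to start listening on that socket.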
00:20:11.652 22:20:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.652 22:20:08 -- common/autotest_common.sh@10 -- # set +x 00:20:12.590 22:20:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.590 22:20:09 -- common/autotest_common.sh@862 -- # return 0 00:20:12.590 22:20:09 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:12.590 22:20:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.590 22:20:09 -- common/autotest_common.sh@10 -- # set +x 00:20:12.590 NVMe0n1 00:20:12.590 22:20:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.590 22:20:09 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:12.590 22:20:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.590 22:20:09 -- common/autotest_common.sh@10 -- # set +x 00:20:12.590 22:20:09 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:12.590 22:20:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.590 1 00:20:12.590 22:20:09 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:12.590 22:20:09 -- common/autotest_common.sh@650 -- # local es=0 00:20:12.590 22:20:09 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:12.590 22:20:09 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:12.590 22:20:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.590 22:20:09 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:12.590 22:20:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.590 22:20:09 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:12.590 22:20:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.590 22:20:09 -- common/autotest_common.sh@10 -- # set +x 00:20:12.590 2024/11/17 22:20:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:12.590 request: 00:20:12.590 { 00:20:12.590 "method": "bdev_nvme_attach_controller", 00:20:12.590 "params": { 00:20:12.590 "name": "NVMe0", 00:20:12.590 "trtype": "tcp", 00:20:12.850 "traddr": "10.0.0.2", 00:20:12.850 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:12.850 "hostaddr": "10.0.0.2", 00:20:12.850 "hostsvcid": "60000", 00:20:12.850 "adrfam": "ipv4", 00:20:12.850 "trsvcid": "4420", 00:20:12.850 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:12.850 } 00:20:12.850 } 00:20:12.850 Got JSON-RPC error response 00:20:12.850 GoRPCClient: error on JSON-RPC call 00:20:12.850 22:20:09 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:12.850 22:20:09 -- 
common/autotest_common.sh@653 -- # es=1 00:20:12.850 22:20:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:12.850 22:20:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:12.850 22:20:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:12.850 22:20:09 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:12.850 22:20:09 -- common/autotest_common.sh@650 -- # local es=0 00:20:12.850 22:20:09 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:12.850 22:20:09 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:12.850 22:20:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.850 22:20:09 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:12.850 22:20:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.850 22:20:09 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:12.850 22:20:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.850 22:20:09 -- common/autotest_common.sh@10 -- # set +x 00:20:12.850 2024/11/17 22:20:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:12.850 request: 00:20:12.850 { 00:20:12.850 "method": "bdev_nvme_attach_controller", 00:20:12.850 "params": { 00:20:12.850 "name": "NVMe0", 00:20:12.850 "trtype": "tcp", 00:20:12.850 "traddr": "10.0.0.2", 00:20:12.850 "hostaddr": "10.0.0.2", 00:20:12.850 "hostsvcid": "60000", 00:20:12.850 "adrfam": "ipv4", 00:20:12.850 "trsvcid": "4420", 00:20:12.850 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:12.850 } 00:20:12.850 } 00:20:12.850 Got JSON-RPC error response 00:20:12.850 GoRPCClient: error on JSON-RPC call 00:20:12.850 22:20:09 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:12.850 22:20:09 -- common/autotest_common.sh@653 -- # es=1 00:20:12.850 22:20:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:12.850 22:20:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:12.850 22:20:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:12.850 22:20:09 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:12.850 22:20:09 -- common/autotest_common.sh@650 -- # local es=0 00:20:12.850 22:20:09 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:12.850 22:20:09 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:12.850 22:20:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.850 22:20:09 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:12.850 22:20:09 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.850 22:20:09 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:12.850 22:20:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.850 22:20:09 -- common/autotest_common.sh@10 -- # set +x 00:20:12.850 2024/11/17 22:20:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:12.850 request: 00:20:12.850 { 00:20:12.850 "method": "bdev_nvme_attach_controller", 00:20:12.850 "params": { 00:20:12.850 "name": "NVMe0", 00:20:12.850 "trtype": "tcp", 00:20:12.850 "traddr": "10.0.0.2", 00:20:12.850 "hostaddr": "10.0.0.2", 00:20:12.850 "hostsvcid": "60000", 00:20:12.850 "adrfam": "ipv4", 00:20:12.850 "trsvcid": "4420", 00:20:12.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.851 "multipath": "disable" 00:20:12.851 } 00:20:12.851 } 00:20:12.851 Got JSON-RPC error response 00:20:12.851 GoRPCClient: error on JSON-RPC call 00:20:12.851 22:20:09 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:12.851 22:20:09 -- common/autotest_common.sh@653 -- # es=1 00:20:12.851 22:20:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:12.851 22:20:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:12.851 22:20:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:12.851 22:20:09 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:12.851 22:20:09 -- common/autotest_common.sh@650 -- # local es=0 00:20:12.851 22:20:09 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:12.851 22:20:09 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:12.851 22:20:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.851 22:20:09 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:12.851 22:20:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.851 22:20:09 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:12.851 22:20:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.851 22:20:09 -- common/autotest_common.sh@10 -- # set +x 00:20:12.851 2024/11/17 22:20:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:12.851 request: 00:20:12.851 { 00:20:12.851 "method": "bdev_nvme_attach_controller", 00:20:12.851 "params": { 00:20:12.851 "name": "NVMe0", 
00:20:12.851 "trtype": "tcp", 00:20:12.851 "traddr": "10.0.0.2", 00:20:12.851 "hostaddr": "10.0.0.2", 00:20:12.851 "hostsvcid": "60000", 00:20:12.851 "adrfam": "ipv4", 00:20:12.851 "trsvcid": "4420", 00:20:12.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.851 "multipath": "failover" 00:20:12.851 } 00:20:12.851 } 00:20:12.851 Got JSON-RPC error response 00:20:12.851 GoRPCClient: error on JSON-RPC call 00:20:12.851 22:20:09 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:12.851 22:20:09 -- common/autotest_common.sh@653 -- # es=1 00:20:12.851 22:20:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:12.851 22:20:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:12.851 22:20:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:12.851 22:20:09 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:12.851 22:20:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.851 22:20:09 -- common/autotest_common.sh@10 -- # set +x 00:20:12.851 00:20:12.851 22:20:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.851 22:20:09 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:12.851 22:20:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.851 22:20:09 -- common/autotest_common.sh@10 -- # set +x 00:20:12.851 22:20:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.851 22:20:09 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:12.851 22:20:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.851 22:20:09 -- common/autotest_common.sh@10 -- # set +x 00:20:12.851 00:20:12.851 22:20:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.851 22:20:09 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:12.851 22:20:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.851 22:20:09 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:12.851 22:20:09 -- common/autotest_common.sh@10 -- # set +x 00:20:12.851 22:20:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.851 22:20:09 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:12.851 22:20:09 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.228 0 00:20:14.228 22:20:10 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:14.228 22:20:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.228 22:20:10 -- common/autotest_common.sh@10 -- # set +x 00:20:14.228 22:20:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.228 22:20:10 -- host/multicontroller.sh@100 -- # killprocess 82097 00:20:14.228 22:20:10 -- common/autotest_common.sh@936 -- # '[' -z 82097 ']' 00:20:14.228 22:20:10 -- common/autotest_common.sh@940 -- # kill -0 82097 00:20:14.228 22:20:10 -- common/autotest_common.sh@941 -- # uname 00:20:14.228 22:20:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:14.228 22:20:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82097 00:20:14.228 killing process with pid 82097 00:20:14.228 
22:20:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:14.228 22:20:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:14.228 22:20:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82097' 00:20:14.228 22:20:10 -- common/autotest_common.sh@955 -- # kill 82097 00:20:14.228 22:20:10 -- common/autotest_common.sh@960 -- # wait 82097 00:20:14.487 22:20:10 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:14.487 22:20:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.487 22:20:10 -- common/autotest_common.sh@10 -- # set +x 00:20:14.487 22:20:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.487 22:20:10 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:14.487 22:20:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.487 22:20:10 -- common/autotest_common.sh@10 -- # set +x 00:20:14.487 22:20:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.487 22:20:10 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:14.487 22:20:10 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:14.487 22:20:10 -- common/autotest_common.sh@1607 -- # read -r file 00:20:14.487 22:20:10 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:14.487 22:20:10 -- common/autotest_common.sh@1606 -- # sort -u 00:20:14.487 22:20:10 -- common/autotest_common.sh@1608 -- # cat 00:20:14.487 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:14.487 [2024-11-17 22:20:08.071976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:14.487 [2024-11-17 22:20:08.072080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82097 ] 00:20:14.487 [2024-11-17 22:20:08.212339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.487 [2024-11-17 22:20:08.325254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.487 [2024-11-17 22:20:09.416876] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 0a833242-1f44-4526-ac07-8ee65ac8e50b already exists 00:20:14.487 [2024-11-17 22:20:09.416937] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:0a833242-1f44-4526-ac07-8ee65ac8e50b alias for bdev NVMe1n1 00:20:14.487 [2024-11-17 22:20:09.416955] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:14.487 Running I/O for 1 seconds... 
00:20:14.487 00:20:14.487 Latency(us) 00:20:14.487 [2024-11-17T22:20:11.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.487 [2024-11-17T22:20:11.102Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:14.487 NVMe0n1 : 1.01 22900.60 89.46 0.00 0.00 5576.15 1884.16 13166.78 00:20:14.487 [2024-11-17T22:20:11.102Z] =================================================================================================================== 00:20:14.487 [2024-11-17T22:20:11.102Z] Total : 22900.60 89.46 0.00 0.00 5576.15 1884.16 13166.78 00:20:14.487 Received shutdown signal, test time was about 1.000000 seconds 00:20:14.487 00:20:14.487 Latency(us) 00:20:14.487 [2024-11-17T22:20:11.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.487 [2024-11-17T22:20:11.102Z] =================================================================================================================== 00:20:14.487 [2024-11-17T22:20:11.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.487 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:14.487 22:20:10 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:14.487 22:20:10 -- common/autotest_common.sh@1607 -- # read -r file 00:20:14.487 22:20:10 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:14.487 22:20:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:14.487 22:20:10 -- nvmf/common.sh@116 -- # sync 00:20:14.487 22:20:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:14.487 22:20:11 -- nvmf/common.sh@119 -- # set +e 00:20:14.487 22:20:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:14.487 22:20:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:14.487 rmmod nvme_tcp 00:20:14.487 rmmod nvme_fabrics 00:20:14.487 rmmod nvme_keyring 00:20:14.487 22:20:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:14.487 22:20:11 -- nvmf/common.sh@123 -- # set -e 00:20:14.487 22:20:11 -- nvmf/common.sh@124 -- # return 0 00:20:14.487 22:20:11 -- nvmf/common.sh@477 -- # '[' -n 82045 ']' 00:20:14.487 22:20:11 -- nvmf/common.sh@478 -- # killprocess 82045 00:20:14.487 22:20:11 -- common/autotest_common.sh@936 -- # '[' -z 82045 ']' 00:20:14.487 22:20:11 -- common/autotest_common.sh@940 -- # kill -0 82045 00:20:14.487 22:20:11 -- common/autotest_common.sh@941 -- # uname 00:20:14.487 22:20:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:14.745 22:20:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82045 00:20:14.745 22:20:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:14.745 22:20:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:14.745 22:20:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82045' 00:20:14.745 killing process with pid 82045 00:20:14.745 22:20:11 -- common/autotest_common.sh@955 -- # kill 82045 00:20:14.745 22:20:11 -- common/autotest_common.sh@960 -- # wait 82045 00:20:15.004 22:20:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:15.004 22:20:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:15.004 22:20:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:15.004 22:20:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.004 22:20:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:15.004 22:20:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.004 22:20:11 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:15.004 22:20:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.004 22:20:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:15.004 00:20:15.004 real 0m5.274s 00:20:15.004 user 0m16.575s 00:20:15.004 sys 0m1.184s 00:20:15.004 22:20:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:15.004 22:20:11 -- common/autotest_common.sh@10 -- # set +x 00:20:15.004 ************************************ 00:20:15.004 END TEST nvmf_multicontroller 00:20:15.004 ************************************ 00:20:15.004 22:20:11 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:15.004 22:20:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:15.004 22:20:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:15.004 22:20:11 -- common/autotest_common.sh@10 -- # set +x 00:20:15.004 ************************************ 00:20:15.004 START TEST nvmf_aer 00:20:15.004 ************************************ 00:20:15.004 22:20:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:15.004 * Looking for test storage... 00:20:15.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:15.004 22:20:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:15.004 22:20:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:15.004 22:20:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:15.263 22:20:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:15.263 22:20:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:15.263 22:20:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:15.263 22:20:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:15.263 22:20:11 -- scripts/common.sh@335 -- # IFS=.-: 00:20:15.263 22:20:11 -- scripts/common.sh@335 -- # read -ra ver1 00:20:15.263 22:20:11 -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.263 22:20:11 -- scripts/common.sh@336 -- # read -ra ver2 00:20:15.263 22:20:11 -- scripts/common.sh@337 -- # local 'op=<' 00:20:15.263 22:20:11 -- scripts/common.sh@339 -- # ver1_l=2 00:20:15.263 22:20:11 -- scripts/common.sh@340 -- # ver2_l=1 00:20:15.263 22:20:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:15.263 22:20:11 -- scripts/common.sh@343 -- # case "$op" in 00:20:15.263 22:20:11 -- scripts/common.sh@344 -- # : 1 00:20:15.263 22:20:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:15.263 22:20:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:15.263 22:20:11 -- scripts/common.sh@364 -- # decimal 1 00:20:15.263 22:20:11 -- scripts/common.sh@352 -- # local d=1 00:20:15.263 22:20:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.263 22:20:11 -- scripts/common.sh@354 -- # echo 1 00:20:15.263 22:20:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:15.263 22:20:11 -- scripts/common.sh@365 -- # decimal 2 00:20:15.263 22:20:11 -- scripts/common.sh@352 -- # local d=2 00:20:15.263 22:20:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:15.263 22:20:11 -- scripts/common.sh@354 -- # echo 2 00:20:15.263 22:20:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:15.263 22:20:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:15.263 22:20:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:15.263 22:20:11 -- scripts/common.sh@367 -- # return 0 00:20:15.263 22:20:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:15.263 22:20:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:15.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.263 --rc genhtml_branch_coverage=1 00:20:15.263 --rc genhtml_function_coverage=1 00:20:15.263 --rc genhtml_legend=1 00:20:15.263 --rc geninfo_all_blocks=1 00:20:15.263 --rc geninfo_unexecuted_blocks=1 00:20:15.263 00:20:15.263 ' 00:20:15.263 22:20:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:15.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.263 --rc genhtml_branch_coverage=1 00:20:15.263 --rc genhtml_function_coverage=1 00:20:15.263 --rc genhtml_legend=1 00:20:15.263 --rc geninfo_all_blocks=1 00:20:15.263 --rc geninfo_unexecuted_blocks=1 00:20:15.263 00:20:15.263 ' 00:20:15.263 22:20:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:15.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.263 --rc genhtml_branch_coverage=1 00:20:15.263 --rc genhtml_function_coverage=1 00:20:15.263 --rc genhtml_legend=1 00:20:15.263 --rc geninfo_all_blocks=1 00:20:15.263 --rc geninfo_unexecuted_blocks=1 00:20:15.263 00:20:15.263 ' 00:20:15.263 22:20:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:15.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.263 --rc genhtml_branch_coverage=1 00:20:15.263 --rc genhtml_function_coverage=1 00:20:15.263 --rc genhtml_legend=1 00:20:15.263 --rc geninfo_all_blocks=1 00:20:15.263 --rc geninfo_unexecuted_blocks=1 00:20:15.263 00:20:15.263 ' 00:20:15.263 22:20:11 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:15.263 22:20:11 -- nvmf/common.sh@7 -- # uname -s 00:20:15.263 22:20:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.263 22:20:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.263 22:20:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.263 22:20:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.263 22:20:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.263 22:20:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.263 22:20:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.263 22:20:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.263 22:20:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.263 22:20:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.263 22:20:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:20:15.263 
22:20:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:20:15.263 22:20:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.263 22:20:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.263 22:20:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:15.263 22:20:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:15.263 22:20:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.263 22:20:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.263 22:20:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.263 22:20:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.263 22:20:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.263 22:20:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.263 22:20:11 -- paths/export.sh@5 -- # export PATH 00:20:15.263 22:20:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.263 22:20:11 -- nvmf/common.sh@46 -- # : 0 00:20:15.263 22:20:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:15.263 22:20:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:15.263 22:20:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:15.263 22:20:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.263 22:20:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.263 22:20:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:15.263 22:20:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:15.263 22:20:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:15.263 22:20:11 -- host/aer.sh@11 -- # nvmftestinit 00:20:15.263 22:20:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:15.263 22:20:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.264 22:20:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:15.264 22:20:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:15.264 22:20:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:15.264 22:20:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.264 22:20:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.264 22:20:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.264 22:20:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:15.264 22:20:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:15.264 22:20:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:15.264 22:20:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:15.264 22:20:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:15.264 22:20:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:15.264 22:20:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.264 22:20:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.264 22:20:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:15.264 22:20:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:15.264 22:20:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:15.264 22:20:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:15.264 22:20:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:15.264 22:20:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.264 22:20:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:15.264 22:20:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:15.264 22:20:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:15.264 22:20:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:15.264 22:20:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:15.264 22:20:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:15.264 Cannot find device "nvmf_tgt_br" 00:20:15.264 22:20:11 -- nvmf/common.sh@154 -- # true 00:20:15.264 22:20:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.264 Cannot find device "nvmf_tgt_br2" 00:20:15.264 22:20:11 -- nvmf/common.sh@155 -- # true 00:20:15.264 22:20:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:15.264 22:20:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:15.264 Cannot find device "nvmf_tgt_br" 00:20:15.264 22:20:11 -- nvmf/common.sh@157 -- # true 00:20:15.264 22:20:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:15.264 Cannot find device "nvmf_tgt_br2" 00:20:15.264 22:20:11 -- nvmf/common.sh@158 -- # true 00:20:15.264 22:20:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:15.264 22:20:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:15.264 22:20:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.264 22:20:11 -- nvmf/common.sh@161 -- # true 00:20:15.264 22:20:11 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.264 22:20:11 -- nvmf/common.sh@162 -- # true 00:20:15.264 22:20:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:15.264 22:20:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:15.264 22:20:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:15.264 22:20:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:15.264 22:20:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:15.264 22:20:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:15.264 22:20:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:15.523 22:20:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:15.523 22:20:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:15.523 22:20:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:15.523 22:20:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:15.523 22:20:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:15.523 22:20:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:15.523 22:20:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:15.523 22:20:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:15.523 22:20:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:15.523 22:20:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:15.523 22:20:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:15.523 22:20:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:15.523 22:20:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:15.523 22:20:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:15.523 22:20:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:15.523 22:20:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:15.523 22:20:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:15.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:20:15.523 00:20:15.523 --- 10.0.0.2 ping statistics --- 00:20:15.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.523 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:20:15.523 22:20:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:15.523 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:15.523 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:20:15.523 00:20:15.523 --- 10.0.0.3 ping statistics --- 00:20:15.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.523 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:15.523 22:20:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:15.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:15.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:20:15.523 00:20:15.523 --- 10.0.0.1 ping statistics --- 00:20:15.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.523 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:15.523 22:20:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.523 22:20:12 -- nvmf/common.sh@421 -- # return 0 00:20:15.523 22:20:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:15.523 22:20:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.523 22:20:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:15.523 22:20:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:15.523 22:20:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.523 22:20:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:15.523 22:20:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:15.523 22:20:12 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:15.523 22:20:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:15.523 22:20:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:15.523 22:20:12 -- common/autotest_common.sh@10 -- # set +x 00:20:15.523 22:20:12 -- nvmf/common.sh@469 -- # nvmfpid=82352 00:20:15.523 22:20:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:15.523 22:20:12 -- nvmf/common.sh@470 -- # waitforlisten 82352 00:20:15.523 22:20:12 -- common/autotest_common.sh@829 -- # '[' -z 82352 ']' 00:20:15.523 22:20:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.523 22:20:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.523 22:20:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.523 22:20:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.523 22:20:12 -- common/autotest_common.sh@10 -- # set +x 00:20:15.523 [2024-11-17 22:20:12.094197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:15.523 [2024-11-17 22:20:12.094278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.782 [2024-11-17 22:20:12.236774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:15.782 [2024-11-17 22:20:12.359259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:15.782 [2024-11-17 22:20:12.359804] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.782 [2024-11-17 22:20:12.359991] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.782 [2024-11-17 22:20:12.360187] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
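From here the aer test repeats the same fixture: the veth/namespace network is rebuilt, nvme-tcp is loaded, and a fresh nvmf_tgt (pid 82352) is started inside the namespace on all four cores (-m 0xF). The test itself exercises the namespace-attribute-changed Asynchronous Event: a subsystem is created with -m 2 so it may hold two namespaces, the test/nvme/aer/aer tool connects over TCP and arms AER handling (the waitforfile loop below in the trace shows it signals readiness by creating /tmp/aer_touch_file), and adding a second namespace must produce an AEN for log page 4 (changed namespace list), which appears as the "aer_cb - Changed Namespace" line. A sketch of that sequence using the binaries and RPCs from the trace (same rpc.py assumption as above):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 --name Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

rm -f /tmp/aer_touch_file
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done        # wait until the tool is ready for events

$RPC bdev_malloc_create 64 4096 --name Malloc1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the namespace-changed AEN
wait $aerpid                                                  # the tool exits once the AER was handled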
00:20:15.782 [2024-11-17 22:20:12.360494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.782 [2024-11-17 22:20:12.360631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.782 [2024-11-17 22:20:12.360836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:15.782 [2024-11-17 22:20:12.360981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.719 22:20:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.719 22:20:13 -- common/autotest_common.sh@862 -- # return 0 00:20:16.719 22:20:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:16.719 22:20:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:16.719 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:16.719 22:20:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.719 22:20:13 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:16.719 22:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.719 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:16.719 [2024-11-17 22:20:13.109561] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.719 22:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.719 22:20:13 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:16.719 22:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.719 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:16.719 Malloc0 00:20:16.719 22:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.719 22:20:13 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:16.719 22:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.719 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:16.719 22:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.719 22:20:13 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:16.719 22:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.719 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:16.719 22:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.719 22:20:13 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:16.719 22:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.719 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:16.719 [2024-11-17 22:20:13.188262] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.719 22:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.719 22:20:13 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:16.719 22:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.719 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:16.719 [2024-11-17 22:20:13.195932] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:16.719 [ 00:20:16.719 { 00:20:16.719 "allow_any_host": true, 00:20:16.719 "hosts": [], 00:20:16.719 "listen_addresses": [], 00:20:16.719 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:16.719 "subtype": "Discovery" 00:20:16.719 }, 00:20:16.719 { 00:20:16.719 "allow_any_host": true, 00:20:16.719 "hosts": 
[], 00:20:16.719 "listen_addresses": [ 00:20:16.719 { 00:20:16.719 "adrfam": "IPv4", 00:20:16.719 "traddr": "10.0.0.2", 00:20:16.719 "transport": "TCP", 00:20:16.719 "trsvcid": "4420", 00:20:16.719 "trtype": "TCP" 00:20:16.719 } 00:20:16.719 ], 00:20:16.719 "max_cntlid": 65519, 00:20:16.719 "max_namespaces": 2, 00:20:16.719 "min_cntlid": 1, 00:20:16.719 "model_number": "SPDK bdev Controller", 00:20:16.719 "namespaces": [ 00:20:16.719 { 00:20:16.719 "bdev_name": "Malloc0", 00:20:16.719 "name": "Malloc0", 00:20:16.719 "nguid": "552AFE2BF75A455899AB97D7F3DFC408", 00:20:16.719 "nsid": 1, 00:20:16.719 "uuid": "552afe2b-f75a-4558-99ab-97d7f3dfc408" 00:20:16.719 } 00:20:16.719 ], 00:20:16.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.719 "serial_number": "SPDK00000000000001", 00:20:16.719 "subtype": "NVMe" 00:20:16.719 } 00:20:16.719 ] 00:20:16.719 22:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.719 22:20:13 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:16.719 22:20:13 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:16.719 22:20:13 -- host/aer.sh@33 -- # aerpid=82406 00:20:16.719 22:20:13 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:16.719 22:20:13 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:16.719 22:20:13 -- common/autotest_common.sh@1254 -- # local i=0 00:20:16.719 22:20:13 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:16.719 22:20:13 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:16.719 22:20:13 -- common/autotest_common.sh@1257 -- # i=1 00:20:16.719 22:20:13 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:16.719 22:20:13 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:16.719 22:20:13 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:16.719 22:20:13 -- common/autotest_common.sh@1257 -- # i=2 00:20:16.719 22:20:13 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:16.978 22:20:13 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:16.978 22:20:13 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:16.978 22:20:13 -- common/autotest_common.sh@1265 -- # return 0 00:20:16.978 22:20:13 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:16.978 22:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.978 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:16.978 Malloc1 00:20:16.978 22:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.979 22:20:13 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:16.979 22:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.979 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:16.979 22:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.979 22:20:13 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:16.979 22:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.979 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:16.979 Asynchronous Event Request test 00:20:16.979 Attaching to 10.0.0.2 00:20:16.979 Attached to 10.0.0.2 00:20:16.979 Registering asynchronous event callbacks... 00:20:16.979 Starting namespace attribute notice tests for all controllers... 
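The trace above stands up the AER target with plain SPDK JSON-RPC calls before launching the aer example binary; rpc_cmd in these suites forwards to SPDK's scripts/rpc.py. A minimal standalone sketch of the same setup, with arguments taken from the trace (paths relative to the SPDK repo are illustrative):

  # Target side: TCP transport, a 64 MB / 512 B-block malloc bdev, and a subsystem capped at 2 namespaces
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Host side: run the aer example against the listener, expecting 2 namespaces, with a touch-file handshake
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  # Adding a second namespace while aer waits is what triggers the namespace-attribute notice seen below
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2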
00:20:16.979 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:16.979 aer_cb - Changed Namespace 00:20:16.979 Cleaning up... 00:20:16.979 [ 00:20:16.979 { 00:20:16.979 "allow_any_host": true, 00:20:16.979 "hosts": [], 00:20:16.979 "listen_addresses": [], 00:20:16.979 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:16.979 "subtype": "Discovery" 00:20:16.979 }, 00:20:16.979 { 00:20:16.979 "allow_any_host": true, 00:20:16.979 "hosts": [], 00:20:16.979 "listen_addresses": [ 00:20:16.979 { 00:20:16.979 "adrfam": "IPv4", 00:20:16.979 "traddr": "10.0.0.2", 00:20:16.979 "transport": "TCP", 00:20:16.979 "trsvcid": "4420", 00:20:16.979 "trtype": "TCP" 00:20:16.979 } 00:20:16.979 ], 00:20:16.979 "max_cntlid": 65519, 00:20:16.979 "max_namespaces": 2, 00:20:16.979 "min_cntlid": 1, 00:20:16.979 "model_number": "SPDK bdev Controller", 00:20:16.979 "namespaces": [ 00:20:16.979 { 00:20:16.979 "bdev_name": "Malloc0", 00:20:16.979 "name": "Malloc0", 00:20:16.979 "nguid": "552AFE2BF75A455899AB97D7F3DFC408", 00:20:16.979 "nsid": 1, 00:20:16.979 "uuid": "552afe2b-f75a-4558-99ab-97d7f3dfc408" 00:20:16.979 }, 00:20:16.979 { 00:20:16.979 "bdev_name": "Malloc1", 00:20:16.979 "name": "Malloc1", 00:20:16.979 "nguid": "F6A3E08AF2244C1689F9E4BF827501DD", 00:20:16.979 "nsid": 2, 00:20:16.979 "uuid": "f6a3e08a-f224-4c16-89f9-e4bf827501dd" 00:20:16.979 } 00:20:16.979 ], 00:20:16.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.979 "serial_number": "SPDK00000000000001", 00:20:16.979 "subtype": "NVMe" 00:20:16.979 } 00:20:16.979 ] 00:20:16.979 22:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.979 22:20:13 -- host/aer.sh@43 -- # wait 82406 00:20:16.979 22:20:13 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:16.979 22:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.979 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:16.979 22:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.979 22:20:13 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:16.979 22:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.979 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:17.238 22:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.238 22:20:13 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.238 22:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.238 22:20:13 -- common/autotest_common.sh@10 -- # set +x 00:20:17.238 22:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.238 22:20:13 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:17.238 22:20:13 -- host/aer.sh@51 -- # nvmftestfini 00:20:17.238 22:20:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:17.238 22:20:13 -- nvmf/common.sh@116 -- # sync 00:20:17.238 22:20:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:17.238 22:20:13 -- nvmf/common.sh@119 -- # set +e 00:20:17.238 22:20:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:17.238 22:20:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:17.238 rmmod nvme_tcp 00:20:17.238 rmmod nvme_fabrics 00:20:17.238 rmmod nvme_keyring 00:20:17.238 22:20:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:17.238 22:20:13 -- nvmf/common.sh@123 -- # set -e 00:20:17.238 22:20:13 -- nvmf/common.sh@124 -- # return 0 00:20:17.238 22:20:13 -- nvmf/common.sh@477 -- # '[' -n 82352 ']' 00:20:17.238 22:20:13 -- nvmf/common.sh@478 -- # killprocess 82352 00:20:17.238 22:20:13 -- 
common/autotest_common.sh@936 -- # '[' -z 82352 ']' 00:20:17.238 22:20:13 -- common/autotest_common.sh@940 -- # kill -0 82352 00:20:17.238 22:20:13 -- common/autotest_common.sh@941 -- # uname 00:20:17.238 22:20:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:17.238 22:20:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82352 00:20:17.238 22:20:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:17.238 22:20:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:17.238 22:20:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82352' 00:20:17.238 killing process with pid 82352 00:20:17.238 22:20:13 -- common/autotest_common.sh@955 -- # kill 82352 00:20:17.238 [2024-11-17 22:20:13.760658] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:17.238 22:20:13 -- common/autotest_common.sh@960 -- # wait 82352 00:20:17.497 22:20:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:17.497 22:20:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:17.497 22:20:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:17.497 22:20:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.497 22:20:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:17.497 22:20:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.497 22:20:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.497 22:20:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.497 22:20:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:17.756 00:20:17.756 real 0m2.606s 00:20:17.756 user 0m6.969s 00:20:17.756 sys 0m0.731s 00:20:17.756 22:20:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:17.756 22:20:14 -- common/autotest_common.sh@10 -- # set +x 00:20:17.756 ************************************ 00:20:17.756 END TEST nvmf_aer 00:20:17.756 ************************************ 00:20:17.756 22:20:14 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:17.756 22:20:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:17.756 22:20:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:17.756 22:20:14 -- common/autotest_common.sh@10 -- # set +x 00:20:17.756 ************************************ 00:20:17.756 START TEST nvmf_async_init 00:20:17.756 ************************************ 00:20:17.756 22:20:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:17.756 * Looking for test storage... 
00:20:17.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:17.756 22:20:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:17.756 22:20:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:17.756 22:20:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:17.756 22:20:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:17.756 22:20:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:17.756 22:20:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:17.756 22:20:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:17.756 22:20:14 -- scripts/common.sh@335 -- # IFS=.-: 00:20:17.756 22:20:14 -- scripts/common.sh@335 -- # read -ra ver1 00:20:17.756 22:20:14 -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.756 22:20:14 -- scripts/common.sh@336 -- # read -ra ver2 00:20:17.756 22:20:14 -- scripts/common.sh@337 -- # local 'op=<' 00:20:17.756 22:20:14 -- scripts/common.sh@339 -- # ver1_l=2 00:20:17.756 22:20:14 -- scripts/common.sh@340 -- # ver2_l=1 00:20:17.757 22:20:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:17.757 22:20:14 -- scripts/common.sh@343 -- # case "$op" in 00:20:17.757 22:20:14 -- scripts/common.sh@344 -- # : 1 00:20:17.757 22:20:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:17.757 22:20:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:17.757 22:20:14 -- scripts/common.sh@364 -- # decimal 1 00:20:17.757 22:20:14 -- scripts/common.sh@352 -- # local d=1 00:20:17.757 22:20:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.757 22:20:14 -- scripts/common.sh@354 -- # echo 1 00:20:17.757 22:20:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:17.757 22:20:14 -- scripts/common.sh@365 -- # decimal 2 00:20:17.757 22:20:14 -- scripts/common.sh@352 -- # local d=2 00:20:17.757 22:20:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.757 22:20:14 -- scripts/common.sh@354 -- # echo 2 00:20:17.757 22:20:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:17.757 22:20:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:17.757 22:20:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:17.757 22:20:14 -- scripts/common.sh@367 -- # return 0 00:20:17.757 22:20:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.757 22:20:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:17.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.757 --rc genhtml_branch_coverage=1 00:20:17.757 --rc genhtml_function_coverage=1 00:20:17.757 --rc genhtml_legend=1 00:20:17.757 --rc geninfo_all_blocks=1 00:20:17.757 --rc geninfo_unexecuted_blocks=1 00:20:17.757 00:20:17.757 ' 00:20:17.757 22:20:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:17.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.757 --rc genhtml_branch_coverage=1 00:20:17.757 --rc genhtml_function_coverage=1 00:20:17.757 --rc genhtml_legend=1 00:20:17.757 --rc geninfo_all_blocks=1 00:20:17.757 --rc geninfo_unexecuted_blocks=1 00:20:17.757 00:20:17.757 ' 00:20:17.757 22:20:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:17.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.757 --rc genhtml_branch_coverage=1 00:20:17.757 --rc genhtml_function_coverage=1 00:20:17.757 --rc genhtml_legend=1 00:20:17.757 --rc geninfo_all_blocks=1 00:20:17.757 --rc geninfo_unexecuted_blocks=1 00:20:17.757 00:20:17.757 ' 00:20:17.757 
22:20:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:17.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.757 --rc genhtml_branch_coverage=1 00:20:17.757 --rc genhtml_function_coverage=1 00:20:17.757 --rc genhtml_legend=1 00:20:17.757 --rc geninfo_all_blocks=1 00:20:17.757 --rc geninfo_unexecuted_blocks=1 00:20:17.757 00:20:17.757 ' 00:20:17.757 22:20:14 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:17.757 22:20:14 -- nvmf/common.sh@7 -- # uname -s 00:20:17.757 22:20:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.757 22:20:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.757 22:20:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.757 22:20:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.757 22:20:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.757 22:20:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.757 22:20:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.757 22:20:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.757 22:20:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.757 22:20:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.757 22:20:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:20:17.757 22:20:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:20:17.757 22:20:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.757 22:20:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.757 22:20:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:17.757 22:20:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:17.757 22:20:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.757 22:20:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.757 22:20:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.757 22:20:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.757 22:20:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.757 22:20:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.757 22:20:14 -- paths/export.sh@5 -- # export PATH 00:20:17.757 22:20:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.757 22:20:14 -- nvmf/common.sh@46 -- # : 0 00:20:17.757 22:20:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:17.757 22:20:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:17.757 22:20:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:17.757 22:20:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.757 22:20:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.757 22:20:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:17.757 22:20:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:17.757 22:20:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:17.757 22:20:14 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:17.757 22:20:14 -- host/async_init.sh@14 -- # null_block_size=512 00:20:17.757 22:20:14 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:17.757 22:20:14 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:18.016 22:20:14 -- host/async_init.sh@20 -- # tr -d - 00:20:18.016 22:20:14 -- host/async_init.sh@20 -- # uuidgen 00:20:18.016 22:20:14 -- host/async_init.sh@20 -- # nguid=19a7d612eab34bd4ab308956bc99f373 00:20:18.016 22:20:14 -- host/async_init.sh@22 -- # nvmftestinit 00:20:18.016 22:20:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:18.016 22:20:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.016 22:20:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:18.016 22:20:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:18.016 22:20:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:18.016 22:20:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.016 22:20:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.016 22:20:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.016 22:20:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:18.016 22:20:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:18.016 22:20:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:18.016 22:20:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:18.016 22:20:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:18.016 22:20:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:18.016 22:20:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.016 22:20:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.016 22:20:14 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:18.016 22:20:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:18.016 22:20:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:18.016 22:20:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:18.016 22:20:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:18.016 22:20:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.016 22:20:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:18.016 22:20:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:18.016 22:20:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:18.016 22:20:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:18.016 22:20:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:18.016 22:20:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:18.016 Cannot find device "nvmf_tgt_br" 00:20:18.016 22:20:14 -- nvmf/common.sh@154 -- # true 00:20:18.016 22:20:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:18.016 Cannot find device "nvmf_tgt_br2" 00:20:18.016 22:20:14 -- nvmf/common.sh@155 -- # true 00:20:18.016 22:20:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:18.016 22:20:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:18.016 Cannot find device "nvmf_tgt_br" 00:20:18.016 22:20:14 -- nvmf/common.sh@157 -- # true 00:20:18.016 22:20:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:18.016 Cannot find device "nvmf_tgt_br2" 00:20:18.016 22:20:14 -- nvmf/common.sh@158 -- # true 00:20:18.016 22:20:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:18.016 22:20:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:18.016 22:20:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:18.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.016 22:20:14 -- nvmf/common.sh@161 -- # true 00:20:18.016 22:20:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:18.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.016 22:20:14 -- nvmf/common.sh@162 -- # true 00:20:18.016 22:20:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:18.016 22:20:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:18.016 22:20:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:18.016 22:20:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:18.016 22:20:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:18.016 22:20:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:18.016 22:20:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:18.275 22:20:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:18.275 22:20:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:18.275 22:20:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:18.275 22:20:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:18.275 22:20:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:18.275 22:20:14 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:18.275 22:20:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:18.275 22:20:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:18.275 22:20:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:18.275 22:20:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:18.275 22:20:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:18.275 22:20:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:18.275 22:20:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:18.275 22:20:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:18.275 22:20:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:18.275 22:20:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:18.275 22:20:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:18.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:20:18.275 00:20:18.275 --- 10.0.0.2 ping statistics --- 00:20:18.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.275 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:18.275 22:20:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:18.275 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:18.275 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:20:18.275 00:20:18.275 --- 10.0.0.3 ping statistics --- 00:20:18.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.275 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:18.275 22:20:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:18.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:20:18.275 00:20:18.275 --- 10.0.0.1 ping statistics --- 00:20:18.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.275 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:18.275 22:20:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.275 22:20:14 -- nvmf/common.sh@421 -- # return 0 00:20:18.275 22:20:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:18.276 22:20:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.276 22:20:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:18.276 22:20:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:18.276 22:20:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.276 22:20:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:18.276 22:20:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:18.276 22:20:14 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:18.276 22:20:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:18.276 22:20:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.276 22:20:14 -- common/autotest_common.sh@10 -- # set +x 00:20:18.276 22:20:14 -- nvmf/common.sh@469 -- # nvmfpid=82587 00:20:18.276 22:20:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:18.276 22:20:14 -- nvmf/common.sh@470 -- # waitforlisten 82587 00:20:18.276 22:20:14 -- common/autotest_common.sh@829 -- # '[' -z 82587 ']' 00:20:18.276 22:20:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.276 22:20:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.276 22:20:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.276 22:20:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.276 22:20:14 -- common/autotest_common.sh@10 -- # set +x 00:20:18.276 [2024-11-17 22:20:14.828385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:18.276 [2024-11-17 22:20:14.828472] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.535 [2024-11-17 22:20:14.967726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.535 [2024-11-17 22:20:15.097638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:18.535 [2024-11-17 22:20:15.097843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.535 [2024-11-17 22:20:15.097863] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.535 [2024-11-17 22:20:15.097875] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
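Before the async_init suite starts its own nvmf_tgt, nvmf_veth_init rebuilds the virtual topology that the pings above just verified. A condensed sketch of that sequence, using the interface names and addresses from the trace:

  # Namespace plus veth pairs: the initiator stays in the root namespace, the target side moves into nvmf_tgt_ns_spdk
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
  # Bridge the root-namespace peers together and open TCP/4420 on the initiator interface
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # (the trace also brings every link up and pings 10.0.0.1/.2/.3 to confirm reachability)
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1  # the target then runs inside the namespace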
00:20:18.535 [2024-11-17 22:20:15.097910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.470 22:20:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.470 22:20:15 -- common/autotest_common.sh@862 -- # return 0 00:20:19.470 22:20:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:19.470 22:20:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.470 22:20:15 -- common/autotest_common.sh@10 -- # set +x 00:20:19.470 22:20:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.470 22:20:15 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:19.470 22:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.470 22:20:15 -- common/autotest_common.sh@10 -- # set +x 00:20:19.470 [2024-11-17 22:20:15.898312] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.470 22:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.470 22:20:15 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:19.470 22:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.470 22:20:15 -- common/autotest_common.sh@10 -- # set +x 00:20:19.470 null0 00:20:19.470 22:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.470 22:20:15 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:19.470 22:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.470 22:20:15 -- common/autotest_common.sh@10 -- # set +x 00:20:19.470 22:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.470 22:20:15 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:19.470 22:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.470 22:20:15 -- common/autotest_common.sh@10 -- # set +x 00:20:19.470 22:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.470 22:20:15 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 19a7d612eab34bd4ab308956bc99f373 00:20:19.470 22:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.470 22:20:15 -- common/autotest_common.sh@10 -- # set +x 00:20:19.470 22:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.470 22:20:15 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.470 22:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.470 22:20:15 -- common/autotest_common.sh@10 -- # set +x 00:20:19.470 [2024-11-17 22:20:15.938435] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.470 22:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.471 22:20:15 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:19.471 22:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.471 22:20:15 -- common/autotest_common.sh@10 -- # set +x 00:20:19.729 nvme0n1 00:20:19.729 22:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.729 22:20:16 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:19.729 22:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.729 22:20:16 -- common/autotest_common.sh@10 -- # set +x 00:20:19.729 [ 00:20:19.729 { 00:20:19.729 "aliases": [ 00:20:19.729 "19a7d612-eab3-4bd4-ab30-8956bc99f373" 
00:20:19.729 ], 00:20:19.729 "assigned_rate_limits": { 00:20:19.729 "r_mbytes_per_sec": 0, 00:20:19.729 "rw_ios_per_sec": 0, 00:20:19.729 "rw_mbytes_per_sec": 0, 00:20:19.729 "w_mbytes_per_sec": 0 00:20:19.729 }, 00:20:19.729 "block_size": 512, 00:20:19.729 "claimed": false, 00:20:19.729 "driver_specific": { 00:20:19.729 "mp_policy": "active_passive", 00:20:19.729 "nvme": [ 00:20:19.729 { 00:20:19.729 "ctrlr_data": { 00:20:19.729 "ana_reporting": false, 00:20:19.729 "cntlid": 1, 00:20:19.729 "firmware_revision": "24.01.1", 00:20:19.729 "model_number": "SPDK bdev Controller", 00:20:19.729 "multi_ctrlr": true, 00:20:19.729 "oacs": { 00:20:19.729 "firmware": 0, 00:20:19.729 "format": 0, 00:20:19.729 "ns_manage": 0, 00:20:19.729 "security": 0 00:20:19.729 }, 00:20:19.729 "serial_number": "00000000000000000000", 00:20:19.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.729 "vendor_id": "0x8086" 00:20:19.729 }, 00:20:19.729 "ns_data": { 00:20:19.729 "can_share": true, 00:20:19.729 "id": 1 00:20:19.729 }, 00:20:19.729 "trid": { 00:20:19.729 "adrfam": "IPv4", 00:20:19.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.730 "traddr": "10.0.0.2", 00:20:19.730 "trsvcid": "4420", 00:20:19.730 "trtype": "TCP" 00:20:19.730 }, 00:20:19.730 "vs": { 00:20:19.730 "nvme_version": "1.3" 00:20:19.730 } 00:20:19.730 } 00:20:19.730 ] 00:20:19.730 }, 00:20:19.730 "name": "nvme0n1", 00:20:19.730 "num_blocks": 2097152, 00:20:19.730 "product_name": "NVMe disk", 00:20:19.730 "supported_io_types": { 00:20:19.730 "abort": true, 00:20:19.730 "compare": true, 00:20:19.730 "compare_and_write": true, 00:20:19.730 "flush": true, 00:20:19.730 "nvme_admin": true, 00:20:19.730 "nvme_io": true, 00:20:19.730 "read": true, 00:20:19.730 "reset": true, 00:20:19.730 "unmap": false, 00:20:19.730 "write": true, 00:20:19.730 "write_zeroes": true 00:20:19.730 }, 00:20:19.730 "uuid": "19a7d612-eab3-4bd4-ab30-8956bc99f373", 00:20:19.730 "zoned": false 00:20:19.730 } 00:20:19.730 ] 00:20:19.730 22:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.730 22:20:16 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:19.730 22:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.730 22:20:16 -- common/autotest_common.sh@10 -- # set +x 00:20:19.730 [2024-11-17 22:20:16.206859] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.730 [2024-11-17 22:20:16.206936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105df90 (9): Bad file descriptor 00:20:19.730 [2024-11-17 22:20:16.338920] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
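On the host side, async_init attaches a bdev_nvme controller to the subsystem it just exported, inspects the resulting nvme0n1 bdev, and then resets the controller; the "Bad file descriptor" flush message above is logged while the reset tears the connection down, and the reconnect shows up as cntlid 2 in the dump that follows. Roughly the same flow with standalone RPC calls (arguments copied from the trace):

  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1          # first connection: ctrlr_data reports cntlid 1
  ./scripts/rpc.py bdev_nvme_reset_controller nvme0   # disconnect and reconnect over the same trid
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1          # after the reset: cntlid 2, same uuid/nguid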
00:20:19.989 22:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.989 22:20:16 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:19.989 22:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.989 22:20:16 -- common/autotest_common.sh@10 -- # set +x 00:20:19.989 [ 00:20:19.989 { 00:20:19.989 "aliases": [ 00:20:19.989 "19a7d612-eab3-4bd4-ab30-8956bc99f373" 00:20:19.989 ], 00:20:19.989 "assigned_rate_limits": { 00:20:19.989 "r_mbytes_per_sec": 0, 00:20:19.989 "rw_ios_per_sec": 0, 00:20:19.989 "rw_mbytes_per_sec": 0, 00:20:19.989 "w_mbytes_per_sec": 0 00:20:19.989 }, 00:20:19.989 "block_size": 512, 00:20:19.989 "claimed": false, 00:20:19.989 "driver_specific": { 00:20:19.989 "mp_policy": "active_passive", 00:20:19.989 "nvme": [ 00:20:19.989 { 00:20:19.989 "ctrlr_data": { 00:20:19.989 "ana_reporting": false, 00:20:19.989 "cntlid": 2, 00:20:19.989 "firmware_revision": "24.01.1", 00:20:19.989 "model_number": "SPDK bdev Controller", 00:20:19.989 "multi_ctrlr": true, 00:20:19.989 "oacs": { 00:20:19.989 "firmware": 0, 00:20:19.989 "format": 0, 00:20:19.989 "ns_manage": 0, 00:20:19.989 "security": 0 00:20:19.989 }, 00:20:19.989 "serial_number": "00000000000000000000", 00:20:19.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.989 "vendor_id": "0x8086" 00:20:19.989 }, 00:20:19.989 "ns_data": { 00:20:19.989 "can_share": true, 00:20:19.989 "id": 1 00:20:19.989 }, 00:20:19.989 "trid": { 00:20:19.989 "adrfam": "IPv4", 00:20:19.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.989 "traddr": "10.0.0.2", 00:20:19.989 "trsvcid": "4420", 00:20:19.989 "trtype": "TCP" 00:20:19.989 }, 00:20:19.989 "vs": { 00:20:19.989 "nvme_version": "1.3" 00:20:19.989 } 00:20:19.989 } 00:20:19.989 ] 00:20:19.989 }, 00:20:19.989 "name": "nvme0n1", 00:20:19.989 "num_blocks": 2097152, 00:20:19.989 "product_name": "NVMe disk", 00:20:19.989 "supported_io_types": { 00:20:19.989 "abort": true, 00:20:19.989 "compare": true, 00:20:19.989 "compare_and_write": true, 00:20:19.989 "flush": true, 00:20:19.989 "nvme_admin": true, 00:20:19.989 "nvme_io": true, 00:20:19.989 "read": true, 00:20:19.989 "reset": true, 00:20:19.989 "unmap": false, 00:20:19.989 "write": true, 00:20:19.989 "write_zeroes": true 00:20:19.989 }, 00:20:19.989 "uuid": "19a7d612-eab3-4bd4-ab30-8956bc99f373", 00:20:19.989 "zoned": false 00:20:19.989 } 00:20:19.989 ] 00:20:19.989 22:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.989 22:20:16 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.989 22:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.989 22:20:16 -- common/autotest_common.sh@10 -- # set +x 00:20:19.989 22:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.989 22:20:16 -- host/async_init.sh@53 -- # mktemp 00:20:19.989 22:20:16 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.H75VoQbsxQ 00:20:19.989 22:20:16 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:19.989 22:20:16 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.H75VoQbsxQ 00:20:19.989 22:20:16 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:19.989 22:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.989 22:20:16 -- common/autotest_common.sh@10 -- # set +x 00:20:19.989 22:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.989 22:20:16 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:19.989 22:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.989 22:20:16 -- common/autotest_common.sh@10 -- # set +x 00:20:19.989 [2024-11-17 22:20:16.406981] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.989 [2024-11-17 22:20:16.407095] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:19.989 22:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.989 22:20:16 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H75VoQbsxQ 00:20:19.989 22:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.989 22:20:16 -- common/autotest_common.sh@10 -- # set +x 00:20:19.989 22:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.989 22:20:16 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H75VoQbsxQ 00:20:19.989 22:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.989 22:20:16 -- common/autotest_common.sh@10 -- # set +x 00:20:19.989 [2024-11-17 22:20:16.422981] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.989 nvme0n1 00:20:19.989 22:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.989 22:20:16 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:19.989 22:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.989 22:20:16 -- common/autotest_common.sh@10 -- # set +x 00:20:19.989 [ 00:20:19.989 { 00:20:19.989 "aliases": [ 00:20:19.989 "19a7d612-eab3-4bd4-ab30-8956bc99f373" 00:20:19.989 ], 00:20:19.989 "assigned_rate_limits": { 00:20:19.989 "r_mbytes_per_sec": 0, 00:20:19.989 "rw_ios_per_sec": 0, 00:20:19.989 "rw_mbytes_per_sec": 0, 00:20:19.989 "w_mbytes_per_sec": 0 00:20:19.989 }, 00:20:19.989 "block_size": 512, 00:20:19.989 "claimed": false, 00:20:19.989 "driver_specific": { 00:20:19.989 "mp_policy": "active_passive", 00:20:19.989 "nvme": [ 00:20:19.989 { 00:20:19.989 "ctrlr_data": { 00:20:19.989 "ana_reporting": false, 00:20:19.989 "cntlid": 3, 00:20:19.989 "firmware_revision": "24.01.1", 00:20:19.989 "model_number": "SPDK bdev Controller", 00:20:19.989 "multi_ctrlr": true, 00:20:19.989 "oacs": { 00:20:19.989 "firmware": 0, 00:20:19.989 "format": 0, 00:20:19.989 "ns_manage": 0, 00:20:19.989 "security": 0 00:20:19.989 }, 00:20:19.989 "serial_number": "00000000000000000000", 00:20:19.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.989 "vendor_id": "0x8086" 00:20:19.989 }, 00:20:19.989 "ns_data": { 00:20:19.989 "can_share": true, 00:20:19.989 "id": 1 00:20:19.989 }, 00:20:19.989 "trid": { 00:20:19.989 "adrfam": "IPv4", 00:20:19.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.989 "traddr": "10.0.0.2", 00:20:19.989 "trsvcid": "4421", 00:20:19.989 "trtype": "TCP" 00:20:19.989 }, 00:20:19.989 "vs": { 00:20:19.989 "nvme_version": "1.3" 00:20:19.989 } 00:20:19.989 } 00:20:19.989 ] 00:20:19.989 }, 00:20:19.989 "name": "nvme0n1", 00:20:19.989 "num_blocks": 2097152, 00:20:19.990 "product_name": "NVMe disk", 00:20:19.990 "supported_io_types": { 00:20:19.990 "abort": true, 00:20:19.990 "compare": true, 00:20:19.990 "compare_and_write": true, 00:20:19.990 "flush": true, 00:20:19.990 "nvme_admin": true, 00:20:19.990 "nvme_io": true, 00:20:19.990 
"read": true, 00:20:19.990 "reset": true, 00:20:19.990 "unmap": false, 00:20:19.990 "write": true, 00:20:19.990 "write_zeroes": true 00:20:19.990 }, 00:20:19.990 "uuid": "19a7d612-eab3-4bd4-ab30-8956bc99f373", 00:20:19.990 "zoned": false 00:20:19.990 } 00:20:19.990 ] 00:20:19.990 22:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.990 22:20:16 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.990 22:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.990 22:20:16 -- common/autotest_common.sh@10 -- # set +x 00:20:19.990 22:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.990 22:20:16 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.H75VoQbsxQ 00:20:19.990 22:20:16 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:19.990 22:20:16 -- host/async_init.sh@78 -- # nvmftestfini 00:20:19.990 22:20:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:19.990 22:20:16 -- nvmf/common.sh@116 -- # sync 00:20:19.990 22:20:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:19.990 22:20:16 -- nvmf/common.sh@119 -- # set +e 00:20:19.990 22:20:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:19.990 22:20:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:19.990 rmmod nvme_tcp 00:20:19.990 rmmod nvme_fabrics 00:20:20.249 rmmod nvme_keyring 00:20:20.249 22:20:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:20.249 22:20:16 -- nvmf/common.sh@123 -- # set -e 00:20:20.249 22:20:16 -- nvmf/common.sh@124 -- # return 0 00:20:20.249 22:20:16 -- nvmf/common.sh@477 -- # '[' -n 82587 ']' 00:20:20.249 22:20:16 -- nvmf/common.sh@478 -- # killprocess 82587 00:20:20.249 22:20:16 -- common/autotest_common.sh@936 -- # '[' -z 82587 ']' 00:20:20.249 22:20:16 -- common/autotest_common.sh@940 -- # kill -0 82587 00:20:20.249 22:20:16 -- common/autotest_common.sh@941 -- # uname 00:20:20.249 22:20:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:20.249 22:20:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82587 00:20:20.249 22:20:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:20.249 22:20:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:20.249 22:20:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82587' 00:20:20.249 killing process with pid 82587 00:20:20.249 22:20:16 -- common/autotest_common.sh@955 -- # kill 82587 00:20:20.249 22:20:16 -- common/autotest_common.sh@960 -- # wait 82587 00:20:20.507 22:20:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:20.507 22:20:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:20.507 22:20:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:20.507 22:20:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.507 22:20:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:20.507 22:20:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.507 22:20:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.508 22:20:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.508 22:20:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:20.508 00:20:20.508 real 0m2.828s 00:20:20.508 user 0m2.596s 00:20:20.508 sys 0m0.706s 00:20:20.508 22:20:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:20.508 22:20:16 -- common/autotest_common.sh@10 -- # set +x 00:20:20.508 ************************************ 00:20:20.508 END TEST nvmf_async_init 00:20:20.508 
************************************ 00:20:20.508 22:20:17 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:20.508 22:20:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:20.508 22:20:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:20.508 22:20:17 -- common/autotest_common.sh@10 -- # set +x 00:20:20.508 ************************************ 00:20:20.508 START TEST dma 00:20:20.508 ************************************ 00:20:20.508 22:20:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:20.508 * Looking for test storage... 00:20:20.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:20.767 22:20:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:20.767 22:20:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:20.767 22:20:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:20.767 22:20:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:20.767 22:20:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:20.767 22:20:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:20.767 22:20:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:20.768 22:20:17 -- scripts/common.sh@335 -- # IFS=.-: 00:20:20.768 22:20:17 -- scripts/common.sh@335 -- # read -ra ver1 00:20:20.768 22:20:17 -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.768 22:20:17 -- scripts/common.sh@336 -- # read -ra ver2 00:20:20.768 22:20:17 -- scripts/common.sh@337 -- # local 'op=<' 00:20:20.768 22:20:17 -- scripts/common.sh@339 -- # ver1_l=2 00:20:20.768 22:20:17 -- scripts/common.sh@340 -- # ver2_l=1 00:20:20.768 22:20:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:20.768 22:20:17 -- scripts/common.sh@343 -- # case "$op" in 00:20:20.768 22:20:17 -- scripts/common.sh@344 -- # : 1 00:20:20.768 22:20:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:20.768 22:20:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:20.768 22:20:17 -- scripts/common.sh@364 -- # decimal 1 00:20:20.768 22:20:17 -- scripts/common.sh@352 -- # local d=1 00:20:20.768 22:20:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.768 22:20:17 -- scripts/common.sh@354 -- # echo 1 00:20:20.768 22:20:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:20.768 22:20:17 -- scripts/common.sh@365 -- # decimal 2 00:20:20.768 22:20:17 -- scripts/common.sh@352 -- # local d=2 00:20:20.768 22:20:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.768 22:20:17 -- scripts/common.sh@354 -- # echo 2 00:20:20.768 22:20:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:20.768 22:20:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:20.768 22:20:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:20.768 22:20:17 -- scripts/common.sh@367 -- # return 0 00:20:20.768 22:20:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.768 22:20:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:20.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.768 --rc genhtml_branch_coverage=1 00:20:20.768 --rc genhtml_function_coverage=1 00:20:20.768 --rc genhtml_legend=1 00:20:20.768 --rc geninfo_all_blocks=1 00:20:20.768 --rc geninfo_unexecuted_blocks=1 00:20:20.768 00:20:20.768 ' 00:20:20.768 22:20:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:20.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.768 --rc genhtml_branch_coverage=1 00:20:20.768 --rc genhtml_function_coverage=1 00:20:20.768 --rc genhtml_legend=1 00:20:20.768 --rc geninfo_all_blocks=1 00:20:20.768 --rc geninfo_unexecuted_blocks=1 00:20:20.768 00:20:20.768 ' 00:20:20.768 22:20:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:20.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.768 --rc genhtml_branch_coverage=1 00:20:20.768 --rc genhtml_function_coverage=1 00:20:20.768 --rc genhtml_legend=1 00:20:20.768 --rc geninfo_all_blocks=1 00:20:20.768 --rc geninfo_unexecuted_blocks=1 00:20:20.768 00:20:20.768 ' 00:20:20.768 22:20:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:20.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.768 --rc genhtml_branch_coverage=1 00:20:20.768 --rc genhtml_function_coverage=1 00:20:20.768 --rc genhtml_legend=1 00:20:20.768 --rc geninfo_all_blocks=1 00:20:20.768 --rc geninfo_unexecuted_blocks=1 00:20:20.768 00:20:20.768 ' 00:20:20.768 22:20:17 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:20.768 22:20:17 -- nvmf/common.sh@7 -- # uname -s 00:20:20.768 22:20:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.768 22:20:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.768 22:20:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.768 22:20:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.768 22:20:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.768 22:20:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.768 22:20:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.768 22:20:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.768 22:20:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.768 22:20:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.768 22:20:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:20:20.768 
22:20:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:20:20.768 22:20:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.768 22:20:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.768 22:20:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:20.768 22:20:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:20.768 22:20:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.768 22:20:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.768 22:20:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.768 22:20:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.768 22:20:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.768 22:20:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.768 22:20:17 -- paths/export.sh@5 -- # export PATH 00:20:20.768 22:20:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.768 22:20:17 -- nvmf/common.sh@46 -- # : 0 00:20:20.768 22:20:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:20.768 22:20:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:20.768 22:20:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:20.768 22:20:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.768 22:20:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.768 22:20:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:20.768 22:20:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:20.768 22:20:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:20.768 22:20:17 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:20.768 22:20:17 -- host/dma.sh@13 -- # exit 0 00:20:20.768 00:20:20.768 real 0m0.215s 00:20:20.768 user 0m0.126s 00:20:20.768 sys 0m0.101s 00:20:20.768 22:20:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:20.768 22:20:17 -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 ************************************ 00:20:20.768 END TEST dma 00:20:20.768 ************************************ 00:20:20.768 22:20:17 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:20.768 22:20:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:20.768 22:20:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:20.768 22:20:17 -- common/autotest_common.sh@10 -- # set +x 00:20:20.769 ************************************ 00:20:20.769 START TEST nvmf_identify 00:20:20.769 ************************************ 00:20:20.769 22:20:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:20.769 * Looking for test storage... 00:20:20.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:20.769 22:20:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:21.028 22:20:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:21.028 22:20:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:21.028 22:20:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:21.028 22:20:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:21.028 22:20:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:21.028 22:20:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:21.028 22:20:17 -- scripts/common.sh@335 -- # IFS=.-: 00:20:21.028 22:20:17 -- scripts/common.sh@335 -- # read -ra ver1 00:20:21.028 22:20:17 -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.028 22:20:17 -- scripts/common.sh@336 -- # read -ra ver2 00:20:21.028 22:20:17 -- scripts/common.sh@337 -- # local 'op=<' 00:20:21.028 22:20:17 -- scripts/common.sh@339 -- # ver1_l=2 00:20:21.028 22:20:17 -- scripts/common.sh@340 -- # ver2_l=1 00:20:21.028 22:20:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:21.028 22:20:17 -- scripts/common.sh@343 -- # case "$op" in 00:20:21.028 22:20:17 -- scripts/common.sh@344 -- # : 1 00:20:21.028 22:20:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:21.028 22:20:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.028 22:20:17 -- scripts/common.sh@364 -- # decimal 1 00:20:21.028 22:20:17 -- scripts/common.sh@352 -- # local d=1 00:20:21.028 22:20:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.028 22:20:17 -- scripts/common.sh@354 -- # echo 1 00:20:21.028 22:20:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:21.028 22:20:17 -- scripts/common.sh@365 -- # decimal 2 00:20:21.028 22:20:17 -- scripts/common.sh@352 -- # local d=2 00:20:21.028 22:20:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.028 22:20:17 -- scripts/common.sh@354 -- # echo 2 00:20:21.028 22:20:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:21.028 22:20:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:21.028 22:20:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:21.028 22:20:17 -- scripts/common.sh@367 -- # return 0 00:20:21.028 22:20:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.028 22:20:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:21.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.028 --rc genhtml_branch_coverage=1 00:20:21.028 --rc genhtml_function_coverage=1 00:20:21.028 --rc genhtml_legend=1 00:20:21.028 --rc geninfo_all_blocks=1 00:20:21.028 --rc geninfo_unexecuted_blocks=1 00:20:21.028 00:20:21.028 ' 00:20:21.028 22:20:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:21.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.028 --rc genhtml_branch_coverage=1 00:20:21.028 --rc genhtml_function_coverage=1 00:20:21.028 --rc genhtml_legend=1 00:20:21.028 --rc geninfo_all_blocks=1 00:20:21.028 --rc geninfo_unexecuted_blocks=1 00:20:21.028 00:20:21.028 ' 00:20:21.028 22:20:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:21.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.028 --rc genhtml_branch_coverage=1 00:20:21.028 --rc genhtml_function_coverage=1 00:20:21.028 --rc genhtml_legend=1 00:20:21.028 --rc geninfo_all_blocks=1 00:20:21.028 --rc geninfo_unexecuted_blocks=1 00:20:21.028 00:20:21.028 ' 00:20:21.028 22:20:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:21.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.028 --rc genhtml_branch_coverage=1 00:20:21.028 --rc genhtml_function_coverage=1 00:20:21.028 --rc genhtml_legend=1 00:20:21.028 --rc geninfo_all_blocks=1 00:20:21.028 --rc geninfo_unexecuted_blocks=1 00:20:21.028 00:20:21.028 ' 00:20:21.028 22:20:17 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.028 22:20:17 -- nvmf/common.sh@7 -- # uname -s 00:20:21.028 22:20:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.028 22:20:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.028 22:20:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.028 22:20:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.028 22:20:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.028 22:20:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.028 22:20:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.028 22:20:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.028 22:20:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.028 22:20:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.028 22:20:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:20:21.028 
22:20:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:20:21.028 22:20:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.028 22:20:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.028 22:20:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.028 22:20:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.028 22:20:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.028 22:20:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.028 22:20:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.028 22:20:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.028 22:20:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.028 22:20:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.028 22:20:17 -- paths/export.sh@5 -- # export PATH 00:20:21.029 22:20:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.029 22:20:17 -- nvmf/common.sh@46 -- # : 0 00:20:21.029 22:20:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:21.029 22:20:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:21.029 22:20:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:21.029 22:20:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.029 22:20:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.029 22:20:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
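The build_nvmf_app_args step above only assembles the target's command line: it appends the shared-memory id and the 0xFFFF tracepoint mask to an NVMF_APP array, and nvmf_veth_init later prefixes that array with the ip netns exec wrapper before the test launches the target. A rough sketch of that layering, assuming the binary path used elsewhere in this run (helper variable names mirror the trace, the exact launch wiring is simplified):

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)            # shm id + all tracepoint groups
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)    # run the target inside the test netns
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0xF &                              # 4-core mask, as in the launch further down
    nvmfpid=$!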
00:20:21.029 22:20:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:21.029 22:20:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:21.029 22:20:17 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.029 22:20:17 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.029 22:20:17 -- host/identify.sh@14 -- # nvmftestinit 00:20:21.029 22:20:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:21.029 22:20:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.029 22:20:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:21.029 22:20:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:21.029 22:20:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:21.029 22:20:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.029 22:20:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.029 22:20:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.029 22:20:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:21.029 22:20:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:21.029 22:20:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:21.029 22:20:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:21.029 22:20:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:21.029 22:20:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:21.029 22:20:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.029 22:20:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.029 22:20:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:21.029 22:20:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:21.029 22:20:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:21.029 22:20:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:21.029 22:20:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:21.029 22:20:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.029 22:20:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:21.029 22:20:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:21.029 22:20:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:21.029 22:20:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:21.029 22:20:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:21.029 22:20:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:21.029 Cannot find device "nvmf_tgt_br" 00:20:21.029 22:20:17 -- nvmf/common.sh@154 -- # true 00:20:21.029 22:20:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:21.029 Cannot find device "nvmf_tgt_br2" 00:20:21.029 22:20:17 -- nvmf/common.sh@155 -- # true 00:20:21.029 22:20:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:21.029 22:20:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:21.029 Cannot find device "nvmf_tgt_br" 00:20:21.029 22:20:17 -- nvmf/common.sh@157 -- # true 00:20:21.029 22:20:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:21.029 Cannot find device "nvmf_tgt_br2" 00:20:21.029 22:20:17 -- nvmf/common.sh@158 -- # true 00:20:21.029 22:20:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:21.029 22:20:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:21.029 22:20:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.288 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:21.288 22:20:17 -- nvmf/common.sh@161 -- # true 00:20:21.288 22:20:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.288 22:20:17 -- nvmf/common.sh@162 -- # true 00:20:21.288 22:20:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:21.288 22:20:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:21.288 22:20:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:21.288 22:20:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:21.288 22:20:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:21.288 22:20:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:21.288 22:20:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:21.288 22:20:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:21.288 22:20:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:21.288 22:20:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:21.288 22:20:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:21.288 22:20:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:21.288 22:20:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:21.288 22:20:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:21.288 22:20:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:21.288 22:20:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:21.288 22:20:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:21.288 22:20:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:21.288 22:20:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.288 22:20:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:21.288 22:20:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:21.288 22:20:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:21.288 22:20:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:21.288 22:20:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:21.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:20:21.288 00:20:21.288 --- 10.0.0.2 ping statistics --- 00:20:21.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.288 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:21.288 22:20:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:21.288 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:21.288 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:20:21.288 00:20:21.288 --- 10.0.0.3 ping statistics --- 00:20:21.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.288 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:21.288 22:20:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:21.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:21.288 00:20:21.288 --- 10.0.0.1 ping statistics --- 00:20:21.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.288 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:21.288 22:20:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.288 22:20:17 -- nvmf/common.sh@421 -- # return 0 00:20:21.288 22:20:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:21.288 22:20:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.288 22:20:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:21.289 22:20:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:21.289 22:20:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.289 22:20:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:21.289 22:20:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:21.289 22:20:17 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:21.289 22:20:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.289 22:20:17 -- common/autotest_common.sh@10 -- # set +x 00:20:21.289 22:20:17 -- host/identify.sh@19 -- # nvmfpid=82872 00:20:21.289 22:20:17 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:21.289 22:20:17 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:21.289 22:20:17 -- host/identify.sh@23 -- # waitforlisten 82872 00:20:21.289 22:20:17 -- common/autotest_common.sh@829 -- # '[' -z 82872 ']' 00:20:21.289 22:20:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.289 22:20:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.289 22:20:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.289 22:20:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.289 22:20:17 -- common/autotest_common.sh@10 -- # set +x 00:20:21.548 [2024-11-17 22:20:17.955525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:21.548 [2024-11-17 22:20:17.955614] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.548 [2024-11-17 22:20:18.090729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.807 [2024-11-17 22:20:18.183572] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:21.807 [2024-11-17 22:20:18.183712] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.807 [2024-11-17 22:20:18.183724] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.807 [2024-11-17 22:20:18.183732] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
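Condensed, the nvmf_veth_init block above builds a small veth test network: one initiator interface left in the root namespace, two target interfaces moved into nvmf_tgt_ns_spdk, their peer ends enslaved to the nvmf_br bridge, and TCP/4420 opened on the initiator side; the "Cannot find device" and "Cannot open network namespace" messages earlier are just cleanup of links that do not exist yet. A minimal sketch of the topology (commands copied from the run; the individual 'ip link set ... up' steps are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first listener address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second listener address

    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings above simply confirm that 10.0.0.2 and 10.0.0.3 are reachable from the root namespace and 10.0.0.1 from inside the target namespace before nvmf_tgt is started in that namespace.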
00:20:21.807 [2024-11-17 22:20:18.183871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.807 [2024-11-17 22:20:18.184365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.807 [2024-11-17 22:20:18.184495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.807 [2024-11-17 22:20:18.184501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.373 22:20:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.373 22:20:18 -- common/autotest_common.sh@862 -- # return 0 00:20:22.373 22:20:18 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:22.373 22:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.373 22:20:18 -- common/autotest_common.sh@10 -- # set +x 00:20:22.373 [2024-11-17 22:20:18.904153] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.373 22:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.373 22:20:18 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:22.373 22:20:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:22.373 22:20:18 -- common/autotest_common.sh@10 -- # set +x 00:20:22.373 22:20:18 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:22.373 22:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.373 22:20:18 -- common/autotest_common.sh@10 -- # set +x 00:20:22.633 Malloc0 00:20:22.633 22:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.633 22:20:19 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.633 22:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.633 22:20:19 -- common/autotest_common.sh@10 -- # set +x 00:20:22.633 22:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.633 22:20:19 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:22.633 22:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.633 22:20:19 -- common/autotest_common.sh@10 -- # set +x 00:20:22.633 22:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.633 22:20:19 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.633 22:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.633 22:20:19 -- common/autotest_common.sh@10 -- # set +x 00:20:22.633 [2024-11-17 22:20:19.027366] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.633 22:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.633 22:20:19 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:22.633 22:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.633 22:20:19 -- common/autotest_common.sh@10 -- # set +x 00:20:22.633 22:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.633 22:20:19 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:22.633 22:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.633 22:20:19 -- common/autotest_common.sh@10 -- # set +x 00:20:22.633 [2024-11-17 22:20:19.043096] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:22.633 [ 
00:20:22.633 { 00:20:22.633 "allow_any_host": true, 00:20:22.633 "hosts": [], 00:20:22.633 "listen_addresses": [ 00:20:22.633 { 00:20:22.633 "adrfam": "IPv4", 00:20:22.633 "traddr": "10.0.0.2", 00:20:22.633 "transport": "TCP", 00:20:22.633 "trsvcid": "4420", 00:20:22.633 "trtype": "TCP" 00:20:22.633 } 00:20:22.633 ], 00:20:22.633 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:22.633 "subtype": "Discovery" 00:20:22.633 }, 00:20:22.633 { 00:20:22.633 "allow_any_host": true, 00:20:22.633 "hosts": [], 00:20:22.633 "listen_addresses": [ 00:20:22.633 { 00:20:22.633 "adrfam": "IPv4", 00:20:22.633 "traddr": "10.0.0.2", 00:20:22.633 "transport": "TCP", 00:20:22.633 "trsvcid": "4420", 00:20:22.633 "trtype": "TCP" 00:20:22.633 } 00:20:22.633 ], 00:20:22.633 "max_cntlid": 65519, 00:20:22.633 "max_namespaces": 32, 00:20:22.633 "min_cntlid": 1, 00:20:22.633 "model_number": "SPDK bdev Controller", 00:20:22.633 "namespaces": [ 00:20:22.633 { 00:20:22.633 "bdev_name": "Malloc0", 00:20:22.633 "eui64": "ABCDEF0123456789", 00:20:22.633 "name": "Malloc0", 00:20:22.633 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:22.633 "nsid": 1, 00:20:22.633 "uuid": "49818fb0-7011-4e4d-bafb-58b73e07eb93" 00:20:22.633 } 00:20:22.633 ], 00:20:22.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.633 "serial_number": "SPDK00000000000001", 00:20:22.633 "subtype": "NVMe" 00:20:22.633 } 00:20:22.633 ] 00:20:22.633 22:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.633 22:20:19 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:22.633 [2024-11-17 22:20:19.079172] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
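The rpc_cmd calls above configure the running target over its JSON-RPC socket: create the TCP transport, create a 64 MiB Malloc bdev with 512-byte blocks, expose it as namespace 1 of nqn.2016-06.io.spdk:cnode1, and add listeners for both that subsystem and discovery, which is what the nvmf_get_subsystems dump reflects. A sketch of the same sequence issued with scripts/rpc.py directly (rpc_cmd in the test suite is essentially a wrapper around it; the default /var/tmp/spdk.sock socket is assumed):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
         --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems        # returns the JSON shown above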
00:20:22.633 [2024-11-17 22:20:19.079234] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82925 ] 00:20:22.633 [2024-11-17 22:20:19.216113] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:22.633 [2024-11-17 22:20:19.216184] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:22.633 [2024-11-17 22:20:19.216190] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:22.633 [2024-11-17 22:20:19.216199] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:22.633 [2024-11-17 22:20:19.216210] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:22.633 [2024-11-17 22:20:19.216360] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:22.633 [2024-11-17 22:20:19.216455] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b11d30 0 00:20:22.633 [2024-11-17 22:20:19.230791] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:22.633 [2024-11-17 22:20:19.230812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:22.633 [2024-11-17 22:20:19.230828] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:22.633 [2024-11-17 22:20:19.230832] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:22.633 [2024-11-17 22:20:19.230881] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.633 [2024-11-17 22:20:19.230888] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.633 [2024-11-17 22:20:19.230892] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b11d30) 00:20:22.633 [2024-11-17 22:20:19.230907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:22.633 [2024-11-17 22:20:19.230937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6ff30, cid 0, qid 0 00:20:22.633 [2024-11-17 22:20:19.238786] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.633 [2024-11-17 22:20:19.238804] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.633 [2024-11-17 22:20:19.238809] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.633 [2024-11-17 22:20:19.238822] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b6ff30) on tqpair=0x1b11d30 00:20:22.633 [2024-11-17 22:20:19.238837] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:22.633 [2024-11-17 22:20:19.238844] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:22.633 [2024-11-17 22:20:19.238850] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:22.633 [2024-11-17 22:20:19.238866] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.633 [2024-11-17 22:20:19.238870] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.633 [2024-11-17 
22:20:19.238874] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b11d30) 00:20:22.633 [2024-11-17 22:20:19.238882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-11-17 22:20:19.238909] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6ff30, cid 0, qid 0 00:20:22.633 [2024-11-17 22:20:19.238982] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.633 [2024-11-17 22:20:19.238988] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.633 [2024-11-17 22:20:19.238991] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.633 [2024-11-17 22:20:19.238995] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b6ff30) on tqpair=0x1b11d30 00:20:22.633 [2024-11-17 22:20:19.239001] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:22.633 [2024-11-17 22:20:19.239008] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:22.633 [2024-11-17 22:20:19.239015] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.633 [2024-11-17 22:20:19.239019] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.633 [2024-11-17 22:20:19.239022] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b11d30) 00:20:22.633 [2024-11-17 22:20:19.239029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-11-17 22:20:19.239047] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6ff30, cid 0, qid 0 00:20:22.633 [2024-11-17 22:20:19.239170] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.633 [2024-11-17 22:20:19.239176] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.633 [2024-11-17 22:20:19.239179] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239183] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b6ff30) on tqpair=0x1b11d30 00:20:22.634 [2024-11-17 22:20:19.239189] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:22.634 [2024-11-17 22:20:19.239196] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:22.634 [2024-11-17 22:20:19.239203] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239207] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239210] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b11d30) 00:20:22.634 [2024-11-17 22:20:19.239216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.634 [2024-11-17 22:20:19.239234] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6ff30, cid 0, qid 0 00:20:22.634 [2024-11-17 22:20:19.239300] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.634 [2024-11-17 22:20:19.239306] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.634 [2024-11-17 22:20:19.239310] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239313] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b6ff30) on tqpair=0x1b11d30 00:20:22.634 [2024-11-17 22:20:19.239319] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:22.634 [2024-11-17 22:20:19.239330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239334] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239338] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b11d30) 00:20:22.634 [2024-11-17 22:20:19.239344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.634 [2024-11-17 22:20:19.239361] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6ff30, cid 0, qid 0 00:20:22.634 [2024-11-17 22:20:19.239453] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.634 [2024-11-17 22:20:19.239459] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.634 [2024-11-17 22:20:19.239462] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239466] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b6ff30) on tqpair=0x1b11d30 00:20:22.634 [2024-11-17 22:20:19.239471] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:22.634 [2024-11-17 22:20:19.239476] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:22.634 [2024-11-17 22:20:19.239483] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:22.634 [2024-11-17 22:20:19.239589] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:22.634 [2024-11-17 22:20:19.239594] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:22.634 [2024-11-17 22:20:19.239603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239607] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239611] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b11d30) 00:20:22.634 [2024-11-17 22:20:19.239618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.634 [2024-11-17 22:20:19.239637] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6ff30, cid 0, qid 0 00:20:22.634 [2024-11-17 22:20:19.239716] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.634 [2024-11-17 22:20:19.239722] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.634 [2024-11-17 22:20:19.239725] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
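The FABRIC PROPERTY GET/SET exchanges in this stretch of the trace are spdk_nvme_identify bringing the discovery controller up by hand: read VS and CAP, confirm CC.EN=0 and CSTS.RDY=0, write CC.EN=1, then poll until CSTS.RDY=1 before issuing Identify and, further down, fetching the discovery log. Shown only as a hypothetical cross-check against the same target, a kernel-initiator counterpart would be (the in-kernel driver performs the equivalent enable handshake internally):

    nvme discover   -t tcp -a 10.0.0.2 -s 4420        # read the discovery log
    nvme connect    -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list                                         # the Malloc0-backed namespace should appear
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1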
00:20:22.634 [2024-11-17 22:20:19.239729] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b6ff30) on tqpair=0x1b11d30 00:20:22.634 [2024-11-17 22:20:19.239735] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:22.634 [2024-11-17 22:20:19.239744] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b11d30) 00:20:22.634 [2024-11-17 22:20:19.239773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.634 [2024-11-17 22:20:19.239808] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6ff30, cid 0, qid 0 00:20:22.634 [2024-11-17 22:20:19.239886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.634 [2024-11-17 22:20:19.239892] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.634 [2024-11-17 22:20:19.239896] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239899] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b6ff30) on tqpair=0x1b11d30 00:20:22.634 [2024-11-17 22:20:19.239904] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:22.634 [2024-11-17 22:20:19.239909] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:22.634 [2024-11-17 22:20:19.239917] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:22.634 [2024-11-17 22:20:19.239932] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:22.634 [2024-11-17 22:20:19.239943] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239946] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.239950] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b11d30) 00:20:22.634 [2024-11-17 22:20:19.239956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.634 [2024-11-17 22:20:19.239975] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6ff30, cid 0, qid 0 00:20:22.634 [2024-11-17 22:20:19.240094] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.634 [2024-11-17 22:20:19.240106] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.634 [2024-11-17 22:20:19.240111] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240115] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b11d30): datao=0, datal=4096, cccid=0 00:20:22.634 [2024-11-17 22:20:19.240120] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b6ff30) on tqpair(0x1b11d30): expected_datao=0, 
payload_size=4096 00:20:22.634 [2024-11-17 22:20:19.240129] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240133] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240158] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.634 [2024-11-17 22:20:19.240164] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.634 [2024-11-17 22:20:19.240168] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240171] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b6ff30) on tqpair=0x1b11d30 00:20:22.634 [2024-11-17 22:20:19.240181] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:22.634 [2024-11-17 22:20:19.240186] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:22.634 [2024-11-17 22:20:19.240190] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:22.634 [2024-11-17 22:20:19.240196] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:22.634 [2024-11-17 22:20:19.240201] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:22.634 [2024-11-17 22:20:19.240206] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:22.634 [2024-11-17 22:20:19.240219] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:22.634 [2024-11-17 22:20:19.240227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240231] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240235] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b11d30) 00:20:22.634 [2024-11-17 22:20:19.240258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:22.634 [2024-11-17 22:20:19.240281] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6ff30, cid 0, qid 0 00:20:22.634 [2024-11-17 22:20:19.240347] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.634 [2024-11-17 22:20:19.240353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.634 [2024-11-17 22:20:19.240357] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240361] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b6ff30) on tqpair=0x1b11d30 00:20:22.634 [2024-11-17 22:20:19.240370] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240374] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240378] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b11d30) 00:20:22.634 [2024-11-17 22:20:19.240384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.634 [2024-11-17 
22:20:19.240390] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240394] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240398] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b11d30) 00:20:22.634 [2024-11-17 22:20:19.240403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.634 [2024-11-17 22:20:19.240409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240413] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b11d30) 00:20:22.634 [2024-11-17 22:20:19.240422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.634 [2024-11-17 22:20:19.240427] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.634 [2024-11-17 22:20:19.240431] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.240434] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.635 [2024-11-17 22:20:19.240440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.635 [2024-11-17 22:20:19.240445] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:22.635 [2024-11-17 22:20:19.240458] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:22.635 [2024-11-17 22:20:19.240464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.240468] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.240472] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b11d30) 00:20:22.635 [2024-11-17 22:20:19.240478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.635 [2024-11-17 22:20:19.240499] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6ff30, cid 0, qid 0 00:20:22.635 [2024-11-17 22:20:19.240506] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70090, cid 1, qid 0 00:20:22.635 [2024-11-17 22:20:19.240510] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b701f0, cid 2, qid 0 00:20:22.635 [2024-11-17 22:20:19.240515] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.635 [2024-11-17 22:20:19.240519] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b704b0, cid 4, qid 0 00:20:22.635 [2024-11-17 22:20:19.240649] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.635 [2024-11-17 22:20:19.240658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.635 [2024-11-17 22:20:19.240664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.240670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1b704b0) on tqpair=0x1b11d30 00:20:22.635 [2024-11-17 22:20:19.240679] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:22.635 [2024-11-17 22:20:19.240687] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:22.635 [2024-11-17 22:20:19.240703] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.240711] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.240716] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b11d30) 00:20:22.635 [2024-11-17 22:20:19.240726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.635 [2024-11-17 22:20:19.240758] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b704b0, cid 4, qid 0 00:20:22.635 [2024-11-17 22:20:19.240868] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.635 [2024-11-17 22:20:19.240881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.635 [2024-11-17 22:20:19.240885] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.240889] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b11d30): datao=0, datal=4096, cccid=4 00:20:22.635 [2024-11-17 22:20:19.240893] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b704b0) on tqpair(0x1b11d30): expected_datao=0, payload_size=4096 00:20:22.635 [2024-11-17 22:20:19.240901] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.240905] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.240913] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.635 [2024-11-17 22:20:19.240919] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.635 [2024-11-17 22:20:19.240922] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.240926] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b704b0) on tqpair=0x1b11d30 00:20:22.635 [2024-11-17 22:20:19.240957] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:22.635 [2024-11-17 22:20:19.241003] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.241009] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.241013] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b11d30) 00:20:22.635 [2024-11-17 22:20:19.241020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.635 [2024-11-17 22:20:19.241028] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.241031] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.241035] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b11d30) 00:20:22.635 [2024-11-17 22:20:19.241040] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.635 [2024-11-17 22:20:19.241069] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b704b0, cid 4, qid 0 00:20:22.635 [2024-11-17 22:20:19.241077] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70610, cid 5, qid 0 00:20:22.635 [2024-11-17 22:20:19.241180] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.635 [2024-11-17 22:20:19.241186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.635 [2024-11-17 22:20:19.241190] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.241193] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b11d30): datao=0, datal=1024, cccid=4 00:20:22.635 [2024-11-17 22:20:19.241197] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b704b0) on tqpair(0x1b11d30): expected_datao=0, payload_size=1024 00:20:22.635 [2024-11-17 22:20:19.241204] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.241208] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.241214] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.635 [2024-11-17 22:20:19.241219] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.635 [2024-11-17 22:20:19.241223] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.635 [2024-11-17 22:20:19.241226] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70610) on tqpair=0x1b11d30 00:20:22.895 [2024-11-17 22:20:19.281831] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.895 [2024-11-17 22:20:19.281853] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.895 [2024-11-17 22:20:19.281858] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.895 [2024-11-17 22:20:19.281862] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b704b0) on tqpair=0x1b11d30 00:20:22.895 [2024-11-17 22:20:19.281884] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.895 [2024-11-17 22:20:19.281890] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.895 [2024-11-17 22:20:19.281893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b11d30) 00:20:22.895 [2024-11-17 22:20:19.281902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.895 [2024-11-17 22:20:19.281960] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b704b0, cid 4, qid 0 00:20:22.895 [2024-11-17 22:20:19.282040] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.895 [2024-11-17 22:20:19.282047] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.895 [2024-11-17 22:20:19.282051] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.895 [2024-11-17 22:20:19.282054] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b11d30): datao=0, datal=3072, cccid=4 00:20:22.895 [2024-11-17 22:20:19.282059] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b704b0) on tqpair(0x1b11d30): expected_datao=0, payload_size=3072 00:20:22.895 [2024-11-17 
22:20:19.282066] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:22.895 [2024-11-17 22:20:19.282070] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:22.895 [2024-11-17 22:20:19.282078] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:22.895 [2024-11-17 22:20:19.282084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:22.895 [2024-11-17 22:20:19.282087] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:22.895 [2024-11-17 22:20:19.282091] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b704b0) on tqpair=0x1b11d30
00:20:22.895 [2024-11-17 22:20:19.282102] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:22.895 [2024-11-17 22:20:19.282107] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:22.895 [2024-11-17 22:20:19.282110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b11d30)
00:20:22.895 [2024-11-17 22:20:19.282117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.895 [2024-11-17 22:20:19.282144] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b704b0, cid 4, qid 0
00:20:22.895 [2024-11-17 22:20:19.282261] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:22.895 [2024-11-17 22:20:19.282267] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:22.895 [2024-11-17 22:20:19.282270] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:22.895 [2024-11-17 22:20:19.282274] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b11d30): datao=0, datal=8, cccid=4
00:20:22.895 [2024-11-17 22:20:19.282278] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b704b0) on tqpair(0x1b11d30): expected_datao=0, payload_size=8
00:20:22.895 [2024-11-17 22:20:19.282285] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:22.895 [2024-11-17 22:20:19.282288] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:22.895 =====================================================
00:20:22.895 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:20:22.895 =====================================================
00:20:22.895 Controller Capabilities/Features
00:20:22.895 ================================
00:20:22.896 Vendor ID: 0000
00:20:22.896 Subsystem Vendor ID: 0000
00:20:22.896 Serial Number: ....................
00:20:22.896 Model Number: ........................................
00:20:22.896 Firmware Version: 24.01.1
00:20:22.896 Recommended Arb Burst: 0
00:20:22.896 IEEE OUI Identifier: 00 00 00
00:20:22.896 Multi-path I/O
00:20:22.896 May have multiple subsystem ports: No
00:20:22.896 May have multiple controllers: No
00:20:22.896 Associated with SR-IOV VF: No
00:20:22.896 Max Data Transfer Size: 131072
00:20:22.896 Max Number of Namespaces: 0
00:20:22.896 Max Number of I/O Queues: 1024
00:20:22.896 NVMe Specification Version (VS): 1.3
00:20:22.896 NVMe Specification Version (Identify): 1.3
00:20:22.896 Maximum Queue Entries: 128
00:20:22.896 Contiguous Queues Required: Yes
00:20:22.896 Arbitration Mechanisms Supported
00:20:22.896 Weighted Round Robin: Not Supported
00:20:22.896 Vendor Specific: Not Supported
00:20:22.896 Reset Timeout: 15000 ms
00:20:22.896 Doorbell Stride: 4 bytes
00:20:22.896 NVM Subsystem Reset: Not Supported
00:20:22.896 Command Sets Supported
00:20:22.896 NVM Command Set: Supported
00:20:22.896 Boot Partition: Not Supported
00:20:22.896 Memory Page Size Minimum: 4096 bytes
00:20:22.896 Memory Page Size Maximum: 4096 bytes
00:20:22.896 Persistent Memory Region: Not Supported
00:20:22.896 Optional Asynchronous Events Supported
00:20:22.896 Namespace Attribute Notices: Not Supported
00:20:22.896 Firmware Activation Notices: Not Supported
00:20:22.896 ANA Change Notices: Not Supported
00:20:22.896 PLE Aggregate Log Change Notices: Not Supported
00:20:22.896 LBA Status Info Alert Notices: Not Supported
00:20:22.896 EGE Aggregate Log Change Notices: Not Supported
00:20:22.896 Normal NVM Subsystem Shutdown event: Not Supported
00:20:22.896 Zone Descriptor Change Notices: Not Supported
00:20:22.896 Discovery Log Change Notices: Supported
00:20:22.896 Controller Attributes
00:20:22.896 128-bit Host Identifier: Not Supported
00:20:22.896 Non-Operational Permissive Mode: Not Supported
00:20:22.896 NVM Sets: Not Supported
00:20:22.896 Read Recovery Levels: Not Supported
00:20:22.896 Endurance Groups: Not Supported
00:20:22.896 Predictable Latency Mode: Not Supported
00:20:22.896 Traffic Based Keep ALive: Not Supported
00:20:22.896 Namespace Granularity: Not Supported
00:20:22.896 SQ Associations: Not Supported
00:20:22.896 UUID List: Not Supported
00:20:22.896 Multi-Domain Subsystem: Not Supported
00:20:22.896 Fixed Capacity Management: Not Supported
00:20:22.896 Variable Capacity Management: Not Supported
00:20:22.896 Delete Endurance Group: Not Supported
00:20:22.896 Delete NVM Set: Not Supported
00:20:22.896 Extended LBA Formats Supported: Not Supported
00:20:22.896 Flexible Data Placement Supported: Not Supported
00:20:22.896 
00:20:22.896 Controller Memory Buffer Support
00:20:22.896 ================================
00:20:22.896 Supported: No
00:20:22.896 
00:20:22.896 Persistent Memory Region Support
00:20:22.896 ================================
00:20:22.896 Supported: No
00:20:22.896 
00:20:22.896 Admin Command Set Attributes
00:20:22.896 ============================
00:20:22.896 Security Send/Receive: Not Supported
00:20:22.896 Format NVM: Not Supported
00:20:22.896 Firmware Activate/Download: Not Supported
00:20:22.896 Namespace Management: Not Supported
00:20:22.896 Device Self-Test: Not Supported
00:20:22.896 Directives: Not Supported
00:20:22.896 NVMe-MI: Not Supported
00:20:22.896 Virtualization Management: Not Supported
00:20:22.896 Doorbell Buffer Config: Not Supported
00:20:22.896 Get LBA Status Capability: Not Supported
00:20:22.896 Command & Feature Lockdown Capability: Not Supported
00:20:22.896 Abort Command Limit: 1
00:20:22.896 Async Event Request Limit: 4
00:20:22.896 Number of Firmware Slots: N/A
00:20:22.896 Firmware Slot 1 Read-Only: N/A
00:20:22.896 Firmware Activation Without Reset: N/A
00:20:22.896 Multiple Update Detection Support: N/A
00:20:22.896 Firmware Update Granularity: No Information Provided
00:20:22.896 Per-Namespace SMART Log: No
00:20:22.896 Asymmetric Namespace Access Log Page: Not Supported
00:20:22.896 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:22.896 Command Effects Log Page: Not Supported
00:20:22.896 Get Log Page Extended Data: Supported
00:20:22.896 Telemetry Log Pages: Not Supported
00:20:22.896 Persistent Event Log Pages: Not Supported
00:20:22.896 Supported Log Pages Log Page: May Support
00:20:22.896 Commands Supported & Effects Log Page: Not Supported
00:20:22.896 Feature Identifiers & Effects Log Page:May Support
00:20:22.896 NVMe-MI Commands & Effects Log Page: May Support
00:20:22.896 Data Area 4 for Telemetry Log: Not Supported
00:20:22.896 Error Log Page Entries Supported: 128
00:20:22.896 Keep Alive: Not Supported
00:20:22.896 
00:20:22.896 NVM Command Set Attributes
00:20:22.896 ==========================
00:20:22.896 Submission Queue Entry Size
00:20:22.896 Max: 1
00:20:22.896 Min: 1
00:20:22.896 Completion Queue Entry Size
00:20:22.896 Max: 1
00:20:22.896 Min: 1
00:20:22.896 Number of Namespaces: 0
00:20:22.896 Compare Command: Not Supported
00:20:22.896 Write Uncorrectable Command: Not Supported
00:20:22.896 Dataset Management Command: Not Supported
00:20:22.896 Write Zeroes Command: Not Supported
00:20:22.896 Set Features Save Field: Not Supported
00:20:22.896 Reservations: Not Supported
00:20:22.896 Timestamp: Not Supported
00:20:22.896 Copy: Not Supported
00:20:22.896 Volatile Write Cache: Not Present
00:20:22.896 Atomic Write Unit (Normal): 1
00:20:22.896 Atomic Write Unit (PFail): 1
00:20:22.896 Atomic Compare & Write Unit: 1
00:20:22.896 Fused Compare & Write: Supported
00:20:22.896 Scatter-Gather List
00:20:22.896 SGL Command Set: Supported
00:20:22.896 SGL Keyed: Supported
00:20:22.896 SGL Bit Bucket Descriptor: Not Supported
00:20:22.896 SGL Metadata Pointer: Not Supported
00:20:22.896 Oversized SGL: Not Supported
00:20:22.896 SGL Metadata Address: Not Supported
00:20:22.896 SGL Offset: Supported
00:20:22.896 Transport SGL Data Block: Not Supported
00:20:22.896 Replay Protected Memory Block: Not Supported
00:20:22.896 
00:20:22.896 Firmware Slot Information
00:20:22.896 =========================
00:20:22.896 Active slot: 0
00:20:22.896 
00:20:22.896 
00:20:22.896 Error Log
00:20:22.896 =========
00:20:22.896 
00:20:22.896 Active Namespaces
00:20:22.896 =================
00:20:22.896 Discovery Log Page
00:20:22.896 ==================
00:20:22.896 Generation Counter: 2
00:20:22.896 Number of Records: 2
00:20:22.896 Record Format: 0
00:20:22.896 
00:20:22.896 Discovery Log Entry 0
00:20:22.896 ----------------------
00:20:22.896 Transport Type: 3 (TCP)
00:20:22.896 Address Family: 1 (IPv4)
00:20:22.896 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:22.896 Entry Flags:
00:20:22.896 Duplicate Returned Information: 1
00:20:22.896 Explicit Persistent Connection Support for Discovery: 1
00:20:22.896 Transport Requirements:
00:20:22.896 Secure Channel: Not Required
00:20:22.896 Port ID: 0 (0x0000)
00:20:22.896 Controller ID: 65535 (0xffff)
00:20:22.896 Admin Max SQ Size: 128
00:20:22.896 Transport Service Identifier: 4420
00:20:22.896 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:22.896 Transport Address: 10.0.0.2
00:20:22.896 Discovery Log Entry 1
00:20:22.896 ----------------------
00:20:22.896 Transport Type: 3 (TCP)
00:20:22.896 Address Family: 1 (IPv4)
00:20:22.896 Subsystem Type: 2 (NVM Subsystem)
00:20:22.896 Entry Flags:
00:20:22.896 Duplicate Returned Information: 0
00:20:22.896 Explicit Persistent Connection Support for Discovery: 0
00:20:22.896 Transport Requirements:
00:20:22.896 Secure Channel: Not Required
00:20:22.896 Port ID: 0 (0x0000)
00:20:22.896 Controller ID: 65535 (0xffff)
00:20:22.896 Admin Max SQ Size: 128
00:20:22.896 Transport Service Identifier: 4420
00:20:22.896 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:20:22.896 Transport Address: 10.0.0.2 [2024-11-17 22:20:19.326798] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:22.896 [2024-11-17 22:20:19.326817] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:22.896 [2024-11-17 22:20:19.326822] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:22.896 [2024-11-17 22:20:19.326826] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b704b0) on tqpair=0x1b11d30
00:20:22.896 [2024-11-17 22:20:19.326927] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:20:22.896 [2024-11-17 22:20:19.326943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.896 [2024-11-17 22:20:19.326950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.896 [2024-11-17 22:20:19.326955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.896 [2024-11-17 22:20:19.326960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.896 [2024-11-17 22:20:19.326969] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:22.896 [2024-11-17 22:20:19.326973] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:22.896 [2024-11-17 22:20:19.326977] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30)
00:20:22.896 [2024-11-17 22:20:19.326984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.896 [2024-11-17 22:20:19.327014] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0
00:20:22.896 [2024-11-17 22:20:19.327085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:22.897 [2024-11-17 22:20:19.327091] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:22.897 [2024-11-17 22:20:19.327095] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:22.897 [2024-11-17 22:20:19.327098] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30
00:20:22.897 [2024-11-17 22:20:19.327107] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:22.897 [2024-11-17 22:20:19.327110] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:22.897 [2024-11-17 22:20:19.327114] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30)
00:20:22.897 [2024-11-17 22:20:19.327120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.327145] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.327253] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.327258] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.327261] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327265] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.327270] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:22.897 [2024-11-17 22:20:19.327275] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:22.897 [2024-11-17 22:20:19.327284] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327288] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327291] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.327297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.327315] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.327377] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.327383] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.327386] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327390] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.327400] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327404] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327407] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.327414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.327431] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.327503] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.327509] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.327512] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327516] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.327525] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327529] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327533] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.327539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.327555] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.327611] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.327616] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.327620] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327623] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.327632] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327637] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327640] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.327646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.327663] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.327721] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.327727] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.327730] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327733] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.327756] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327761] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.327771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.327790] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.327862] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.327868] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.327871] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327875] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.327884] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327888] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327892] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.327898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:22.897 [2024-11-17 22:20:19.327915] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.327985] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.327990] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.327993] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.327997] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.328006] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328010] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328013] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.328019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.328036] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.328094] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.328100] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.328103] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328106] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.328115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.328129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.328146] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.328205] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.328211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.328215] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328218] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.328227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328231] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328235] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.328241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.328258] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.328322] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.328328] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.328331] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328334] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.328343] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328347] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328351] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.328357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.328373] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.328432] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.328443] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.328447] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328451] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.328461] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328465] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328468] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.328475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.328492] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.328555] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.328560] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.328564] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328567] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.328576] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328580] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328584] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.328590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.328607] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.328669] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.328674] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 
[2024-11-17 22:20:19.328678] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328681] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.328690] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328694] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328698] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.328704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.328721] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.328794] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.328802] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.328805] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328809] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.328819] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328823] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328826] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.328833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.328851] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.328912] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.328917] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.328920] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328924] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.897 [2024-11-17 22:20:19.328933] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328937] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.328940] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.897 [2024-11-17 22:20:19.328946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.897 [2024-11-17 22:20:19.328963] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.897 [2024-11-17 22:20:19.329028] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.897 [2024-11-17 22:20:19.329033] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.897 [2024-11-17 22:20:19.329037] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.897 [2024-11-17 22:20:19.329040] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.329049] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329053] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329056] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.329063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.329079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.329138] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.329148] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.329152] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329156] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.329166] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329170] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329174] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.329180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.329198] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.329278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.329284] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.329287] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329291] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.329300] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329303] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329307] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.329313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.329330] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.329384] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.329390] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.329394] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329397] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.329406] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329410] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329414] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.329420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.329437] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.329494] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.329500] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.329503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329506] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.329516] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329520] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329523] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.329529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.329547] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.329607] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.329612] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.329615] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329619] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.329628] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329632] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329636] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.329642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.329658] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.329747] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.329756] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.329760] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329763] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.329785] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329793] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.329799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.329819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.329883] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.329889] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.329892] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329896] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.329906] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329910] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.329913] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.329945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.329965] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.330036] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.330042] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.330046] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330049] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.330059] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330063] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330067] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.330073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.330091] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.330167] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.330174] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.330178] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330181] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.330191] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330196] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330200] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.330206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:22.898 [2024-11-17 22:20:19.330224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.330298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.330304] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.330307] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330311] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.330337] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330340] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330344] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.330350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.330367] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.330432] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.330438] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.330441] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330444] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.330454] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330457] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330461] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.330467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.330484] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.330548] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.330554] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.330557] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330560] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.330570] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330574] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330577] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.330583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.330601] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.330660] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.330667] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.330670] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330673] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.330683] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330687] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.330690] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.330696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.330713] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.334799] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.334817] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.334822] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.334834] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.334846] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.334850] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.334854] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b11d30) 00:20:22.898 [2024-11-17 22:20:19.334861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.898 [2024-11-17 22:20:19.334885] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b70350, cid 3, qid 0 00:20:22.898 [2024-11-17 22:20:19.334946] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.898 [2024-11-17 22:20:19.334952] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.898 [2024-11-17 22:20:19.334955] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.898 [2024-11-17 22:20:19.334959] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b70350) on tqpair=0x1b11d30 00:20:22.898 [2024-11-17 22:20:19.334966] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:22.898 00:20:22.898 22:20:19 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:22.898 [2024-11-17 22:20:19.370663] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:22.898 [2024-11-17 22:20:19.370917] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82927 ] 00:20:23.160 [2024-11-17 22:20:19.507670] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:23.160 [2024-11-17 22:20:19.507768] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:23.160 [2024-11-17 22:20:19.507785] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:23.160 [2024-11-17 22:20:19.507796] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:23.160 [2024-11-17 22:20:19.507805] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:23.160 [2024-11-17 22:20:19.507900] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:23.160 [2024-11-17 22:20:19.507946] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1006d30 0 00:20:23.160 [2024-11-17 22:20:19.515145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:23.160 [2024-11-17 22:20:19.515170] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:23.160 [2024-11-17 22:20:19.515185] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:23.160 [2024-11-17 22:20:19.515189] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:23.160 [2024-11-17 22:20:19.515248] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.160 [2024-11-17 22:20:19.515255] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.160 [2024-11-17 22:20:19.515258] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1006d30) 00:20:23.160 [2024-11-17 22:20:19.515269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:23.160 [2024-11-17 22:20:19.515298] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1064f30, cid 0, qid 0 00:20:23.160 [2024-11-17 22:20:19.522784] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.160 [2024-11-17 22:20:19.522804] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.160 [2024-11-17 22:20:19.522809] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.160 [2024-11-17 22:20:19.522813] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1064f30) on tqpair=0x1006d30 00:20:23.160 [2024-11-17 22:20:19.522822] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:23.160 [2024-11-17 22:20:19.522829] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:23.161 [2024-11-17 22:20:19.522834] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:23.161 [2024-11-17 22:20:19.522848] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.522853] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.522856] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1006d30) 00:20:23.161 [2024-11-17 22:20:19.522864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.161 [2024-11-17 22:20:19.522890] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1064f30, cid 0, qid 0 00:20:23.161 [2024-11-17 22:20:19.522959] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.161 [2024-11-17 22:20:19.522966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.161 [2024-11-17 22:20:19.522969] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.522973] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1064f30) on tqpair=0x1006d30 00:20:23.161 [2024-11-17 22:20:19.522978] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:23.161 [2024-11-17 22:20:19.522985] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:23.161 [2024-11-17 22:20:19.522992] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.522995] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.522999] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1006d30) 00:20:23.161 [2024-11-17 22:20:19.523005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.161 [2024-11-17 22:20:19.523023] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1064f30, cid 0, qid 0 00:20:23.161 [2024-11-17 22:20:19.523087] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.161 [2024-11-17 22:20:19.523101] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.161 [2024-11-17 22:20:19.523104] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523107] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1064f30) on tqpair=0x1006d30 00:20:23.161 [2024-11-17 22:20:19.523113] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:23.161 [2024-11-17 22:20:19.523121] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:23.161 [2024-11-17 22:20:19.523127] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523131] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1006d30) 00:20:23.161 [2024-11-17 22:20:19.523140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.161 [2024-11-17 22:20:19.523160] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1064f30, cid 0, qid 0 00:20:23.161 [2024-11-17 22:20:19.523216] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.161 [2024-11-17 22:20:19.523222] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.161 [2024-11-17 
22:20:19.523225] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523229] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1064f30) on tqpair=0x1006d30 00:20:23.161 [2024-11-17 22:20:19.523235] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:23.161 [2024-11-17 22:20:19.523243] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523247] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523251] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1006d30) 00:20:23.161 [2024-11-17 22:20:19.523257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.161 [2024-11-17 22:20:19.523273] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1064f30, cid 0, qid 0 00:20:23.161 [2024-11-17 22:20:19.523330] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.161 [2024-11-17 22:20:19.523336] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.161 [2024-11-17 22:20:19.523340] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523343] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1064f30) on tqpair=0x1006d30 00:20:23.161 [2024-11-17 22:20:19.523348] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:23.161 [2024-11-17 22:20:19.523353] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:23.161 [2024-11-17 22:20:19.523359] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:23.161 [2024-11-17 22:20:19.523465] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:23.161 [2024-11-17 22:20:19.523469] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:23.161 [2024-11-17 22:20:19.523476] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523479] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523483] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1006d30) 00:20:23.161 [2024-11-17 22:20:19.523489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.161 [2024-11-17 22:20:19.523506] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1064f30, cid 0, qid 0 00:20:23.161 [2024-11-17 22:20:19.523571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.161 [2024-11-17 22:20:19.523576] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.161 [2024-11-17 22:20:19.523579] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523583] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1064f30) on tqpair=0x1006d30 00:20:23.161 
[2024-11-17 22:20:19.523588] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:23.161 [2024-11-17 22:20:19.523597] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523601] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523604] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1006d30) 00:20:23.161 [2024-11-17 22:20:19.523611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.161 [2024-11-17 22:20:19.523626] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1064f30, cid 0, qid 0 00:20:23.161 [2024-11-17 22:20:19.523689] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.161 [2024-11-17 22:20:19.523695] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.161 [2024-11-17 22:20:19.523698] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523702] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1064f30) on tqpair=0x1006d30 00:20:23.161 [2024-11-17 22:20:19.523707] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:23.161 [2024-11-17 22:20:19.523711] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:23.161 [2024-11-17 22:20:19.523718] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:23.161 [2024-11-17 22:20:19.523732] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:23.161 [2024-11-17 22:20:19.523761] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523766] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523769] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1006d30) 00:20:23.161 [2024-11-17 22:20:19.523783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.161 [2024-11-17 22:20:19.523803] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1064f30, cid 0, qid 0 00:20:23.161 [2024-11-17 22:20:19.523910] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.161 [2024-11-17 22:20:19.523916] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.161 [2024-11-17 22:20:19.523919] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523923] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1006d30): datao=0, datal=4096, cccid=0 00:20:23.161 [2024-11-17 22:20:19.523927] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1064f30) on tqpair(0x1006d30): expected_datao=0, payload_size=4096 00:20:23.161 [2024-11-17 22:20:19.523934] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523938] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523945] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.161 [2024-11-17 22:20:19.523950] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.161 [2024-11-17 22:20:19.523953] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.523957] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1064f30) on tqpair=0x1006d30 00:20:23.161 [2024-11-17 22:20:19.523965] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:23.161 [2024-11-17 22:20:19.523970] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:23.161 [2024-11-17 22:20:19.523974] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:23.161 [2024-11-17 22:20:19.523977] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:23.161 [2024-11-17 22:20:19.523981] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:23.161 [2024-11-17 22:20:19.523986] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:23.161 [2024-11-17 22:20:19.523998] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:23.161 [2024-11-17 22:20:19.524005] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.524009] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.161 [2024-11-17 22:20:19.524012] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1006d30) 00:20:23.161 [2024-11-17 22:20:19.524019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.162 [2024-11-17 22:20:19.524037] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1064f30, cid 0, qid 0 00:20:23.162 [2024-11-17 22:20:19.524103] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.162 [2024-11-17 22:20:19.524109] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.162 [2024-11-17 22:20:19.524112] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524116] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1064f30) on tqpair=0x1006d30 00:20:23.162 [2024-11-17 22:20:19.524123] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524126] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524130] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1006d30) 00:20:23.162 [2024-11-17 22:20:19.524136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.162 [2024-11-17 22:20:19.524143] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524146] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524149] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1006d30) 00:20:23.162 [2024-11-17 22:20:19.524154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.162 [2024-11-17 22:20:19.524160] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524163] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524166] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1006d30) 00:20:23.162 [2024-11-17 22:20:19.524171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.162 [2024-11-17 22:20:19.524176] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524179] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524182] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1006d30) 00:20:23.162 [2024-11-17 22:20:19.524188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.162 [2024-11-17 22:20:19.524192] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:23.162 [2024-11-17 22:20:19.524204] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:23.162 [2024-11-17 22:20:19.524210] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524213] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524217] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1006d30) 00:20:23.162 [2024-11-17 22:20:19.524223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.162 [2024-11-17 22:20:19.524242] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1064f30, cid 0, qid 0 00:20:23.162 [2024-11-17 22:20:19.524248] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1065090, cid 1, qid 0 00:20:23.162 [2024-11-17 22:20:19.524252] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10651f0, cid 2, qid 0 00:20:23.162 [2024-11-17 22:20:19.524257] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1065350, cid 3, qid 0 00:20:23.162 [2024-11-17 22:20:19.524261] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10654b0, cid 4, qid 0 00:20:23.162 [2024-11-17 22:20:19.524357] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.162 [2024-11-17 22:20:19.524363] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.162 [2024-11-17 22:20:19.524366] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524369] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10654b0) on tqpair=0x1006d30 00:20:23.162 [2024-11-17 22:20:19.524375] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:23.162 [2024-11-17 22:20:19.524379] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:23.162 [2024-11-17 22:20:19.524387] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:23.162 [2024-11-17 22:20:19.524397] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:23.162 [2024-11-17 22:20:19.524403] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524407] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524410] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1006d30) 00:20:23.162 [2024-11-17 22:20:19.524417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.162 [2024-11-17 22:20:19.524433] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10654b0, cid 4, qid 0 00:20:23.162 [2024-11-17 22:20:19.524487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.162 [2024-11-17 22:20:19.524493] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.162 [2024-11-17 22:20:19.524496] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524500] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10654b0) on tqpair=0x1006d30 00:20:23.162 [2024-11-17 22:20:19.524549] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:23.162 [2024-11-17 22:20:19.524565] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:23.162 [2024-11-17 22:20:19.524574] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524577] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524581] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1006d30) 00:20:23.162 [2024-11-17 22:20:19.524587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.162 [2024-11-17 22:20:19.524605] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10654b0, cid 4, qid 0 00:20:23.162 [2024-11-17 22:20:19.524677] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.162 [2024-11-17 22:20:19.524683] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.162 [2024-11-17 22:20:19.524686] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524690] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1006d30): datao=0, datal=4096, cccid=4 00:20:23.162 [2024-11-17 22:20:19.524694] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10654b0) on tqpair(0x1006d30): expected_datao=0, payload_size=4096 00:20:23.162 [2024-11-17 22:20:19.524701] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524704] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:20:23.162 [2024-11-17 22:20:19.524711] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.162 [2024-11-17 22:20:19.524716] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.162 [2024-11-17 22:20:19.524719] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524723] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10654b0) on tqpair=0x1006d30 00:20:23.162 [2024-11-17 22:20:19.524756] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:23.162 [2024-11-17 22:20:19.524768] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:23.162 [2024-11-17 22:20:19.524778] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:23.162 [2024-11-17 22:20:19.524786] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524793] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1006d30) 00:20:23.162 [2024-11-17 22:20:19.524800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.162 [2024-11-17 22:20:19.524820] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10654b0, cid 4, qid 0 00:20:23.162 [2024-11-17 22:20:19.524898] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.162 [2024-11-17 22:20:19.524904] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.162 [2024-11-17 22:20:19.524907] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524911] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1006d30): datao=0, datal=4096, cccid=4 00:20:23.162 [2024-11-17 22:20:19.524915] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10654b0) on tqpair(0x1006d30): expected_datao=0, payload_size=4096 00:20:23.162 [2024-11-17 22:20:19.524921] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524925] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524932] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.162 [2024-11-17 22:20:19.524937] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.162 [2024-11-17 22:20:19.524940] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524943] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10654b0) on tqpair=0x1006d30 00:20:23.162 [2024-11-17 22:20:19.524959] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:23.162 [2024-11-17 22:20:19.524969] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:23.162 [2024-11-17 22:20:19.524976] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524980] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.524984] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1006d30) 00:20:23.162 [2024-11-17 22:20:19.524991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.162 [2024-11-17 22:20:19.525009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10654b0, cid 4, qid 0 00:20:23.162 [2024-11-17 22:20:19.525071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.162 [2024-11-17 22:20:19.525077] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.162 [2024-11-17 22:20:19.525080] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.162 [2024-11-17 22:20:19.525084] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1006d30): datao=0, datal=4096, cccid=4 00:20:23.162 [2024-11-17 22:20:19.525088] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10654b0) on tqpair(0x1006d30): expected_datao=0, payload_size=4096 00:20:23.162 [2024-11-17 22:20:19.525094] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525098] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525105] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.163 [2024-11-17 22:20:19.525110] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.163 [2024-11-17 22:20:19.525112] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525116] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10654b0) on tqpair=0x1006d30 00:20:23.163 [2024-11-17 22:20:19.525125] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:23.163 [2024-11-17 22:20:19.525132] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:23.163 [2024-11-17 22:20:19.525143] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:23.163 [2024-11-17 22:20:19.525149] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:23.163 [2024-11-17 22:20:19.525154] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:23.163 [2024-11-17 22:20:19.525158] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:23.163 [2024-11-17 22:20:19.525163] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:23.163 [2024-11-17 22:20:19.525168] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:23.163 [2024-11-17 22:20:19.525181] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525185] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525188] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1006d30) 00:20:23.163 [2024-11-17 22:20:19.525195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.163 [2024-11-17 22:20:19.525201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525207] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1006d30) 00:20:23.163 [2024-11-17 22:20:19.525212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.163 [2024-11-17 22:20:19.525235] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10654b0, cid 4, qid 0 00:20:23.163 [2024-11-17 22:20:19.525242] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1065610, cid 5, qid 0 00:20:23.163 [2024-11-17 22:20:19.525314] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.163 [2024-11-17 22:20:19.525320] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.163 [2024-11-17 22:20:19.525324] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525327] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10654b0) on tqpair=0x1006d30 00:20:23.163 [2024-11-17 22:20:19.525334] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.163 [2024-11-17 22:20:19.525339] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.163 [2024-11-17 22:20:19.525342] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525345] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1065610) on tqpair=0x1006d30 00:20:23.163 [2024-11-17 22:20:19.525354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525358] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525361] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1006d30) 00:20:23.163 [2024-11-17 22:20:19.525367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.163 [2024-11-17 22:20:19.525384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1065610, cid 5, qid 0 00:20:23.163 [2024-11-17 22:20:19.525441] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.163 [2024-11-17 22:20:19.525447] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.163 [2024-11-17 22:20:19.525450] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525453] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1065610) on tqpair=0x1006d30 00:20:23.163 [2024-11-17 22:20:19.525463] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525467] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1006d30) 00:20:23.163 [2024-11-17 22:20:19.525476] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.163 [2024-11-17 22:20:19.525491] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1065610, cid 5, qid 0 00:20:23.163 [2024-11-17 22:20:19.525548] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.163 [2024-11-17 22:20:19.525554] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.163 [2024-11-17 22:20:19.525557] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525561] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1065610) on tqpair=0x1006d30 00:20:23.163 [2024-11-17 22:20:19.525570] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525574] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525577] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1006d30) 00:20:23.163 [2024-11-17 22:20:19.525583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.163 [2024-11-17 22:20:19.525598] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1065610, cid 5, qid 0 00:20:23.163 [2024-11-17 22:20:19.525653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.163 [2024-11-17 22:20:19.525659] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.163 [2024-11-17 22:20:19.525662] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525665] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1065610) on tqpair=0x1006d30 00:20:23.163 [2024-11-17 22:20:19.525677] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525681] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525684] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1006d30) 00:20:23.163 [2024-11-17 22:20:19.525690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.163 [2024-11-17 22:20:19.525697] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525700] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525704] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1006d30) 00:20:23.163 [2024-11-17 22:20:19.525709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.163 [2024-11-17 22:20:19.525715] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525721] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1006d30) 00:20:23.163 [2024-11-17 22:20:19.525727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:23.163 [2024-11-17 22:20:19.525733] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525754] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525757] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1006d30) 00:20:23.163 [2024-11-17 22:20:19.525763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.163 [2024-11-17 22:20:19.525783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1065610, cid 5, qid 0 00:20:23.163 [2024-11-17 22:20:19.525789] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10654b0, cid 4, qid 0 00:20:23.163 [2024-11-17 22:20:19.525794] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1065770, cid 6, qid 0 00:20:23.163 [2024-11-17 22:20:19.525798] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10658d0, cid 7, qid 0 00:20:23.163 [2024-11-17 22:20:19.525977] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.163 [2024-11-17 22:20:19.525986] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.163 [2024-11-17 22:20:19.525990] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.525993] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1006d30): datao=0, datal=8192, cccid=5 00:20:23.163 [2024-11-17 22:20:19.525998] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1065610) on tqpair(0x1006d30): expected_datao=0, payload_size=8192 00:20:23.163 [2024-11-17 22:20:19.526015] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.526019] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.526025] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.163 [2024-11-17 22:20:19.526030] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.163 [2024-11-17 22:20:19.526033] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.526037] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1006d30): datao=0, datal=512, cccid=4 00:20:23.163 [2024-11-17 22:20:19.526041] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10654b0) on tqpair(0x1006d30): expected_datao=0, payload_size=512 00:20:23.163 [2024-11-17 22:20:19.526048] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.526051] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.526056] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.163 [2024-11-17 22:20:19.526061] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.163 [2024-11-17 22:20:19.526065] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.526068] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1006d30): datao=0, datal=512, cccid=6 00:20:23.163 [2024-11-17 22:20:19.526072] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1065770) on tqpair(0x1006d30): expected_datao=0, payload_size=512 00:20:23.163 [2024-11-17 22:20:19.526079] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.526082] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.163 [2024-11-17 22:20:19.526087] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.163 [2024-11-17 22:20:19.526092] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.164 [2024-11-17 22:20:19.526096] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.164 [2024-11-17 22:20:19.526099] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1006d30): datao=0, datal=4096, cccid=7 00:20:23.164 [2024-11-17 22:20:19.526103] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10658d0) on tqpair(0x1006d30): expected_datao=0, payload_size=4096 00:20:23.164 [2024-11-17 22:20:19.526109] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.164 [2024-11-17 22:20:19.526113] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.164 [2024-11-17 22:20:19.526120] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.164 [2024-11-17 22:20:19.526125] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.164 [2024-11-17 22:20:19.526128] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.164 [2024-11-17 22:20:19.526132] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1065610) on tqpair=0x1006d30 00:20:23.164 [2024-11-17 22:20:19.526152] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.164 [2024-11-17 22:20:19.526158] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.164 [2024-11-17 22:20:19.526161] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.164 [2024-11-17 22:20:19.526165] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10654b0) on tqpair=0x1006d30 00:20:23.164 [2024-11-17 22:20:19.526175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.164 [2024-11-17 22:20:19.526181] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.164 [2024-11-17 22:20:19.526184] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.164 [2024-11-17 22:20:19.526188] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1065770) on tqpair=0x1006d30 00:20:23.164 [2024-11-17 22:20:19.526195] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.164 [2024-11-17 22:20:19.526200] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.164 [2024-11-17 22:20:19.526214] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.164 [2024-11-17 22:20:19.526218] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10658d0) on tqpair=0x1006d30 00:20:23.164 ===================================================== 00:20:23.164 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.164 ===================================================== 00:20:23.164 Controller Capabilities/Features 00:20:23.164 ================================ 00:20:23.164 Vendor ID: 8086 00:20:23.164 Subsystem Vendor ID: 8086 00:20:23.164 Serial Number: SPDK00000000000001 00:20:23.164 Model Number: SPDK bdev Controller 00:20:23.164 Firmware Version: 24.01.1 00:20:23.164 Recommended Arb Burst: 6 00:20:23.164 IEEE OUI Identifier: e4 d2 5c 00:20:23.164 Multi-path I/O 00:20:23.164 May have multiple subsystem 
ports: Yes 00:20:23.164 May have multiple controllers: Yes 00:20:23.164 Associated with SR-IOV VF: No 00:20:23.164 Max Data Transfer Size: 131072 00:20:23.164 Max Number of Namespaces: 32 00:20:23.164 Max Number of I/O Queues: 127 00:20:23.164 NVMe Specification Version (VS): 1.3 00:20:23.164 NVMe Specification Version (Identify): 1.3 00:20:23.164 Maximum Queue Entries: 128 00:20:23.164 Contiguous Queues Required: Yes 00:20:23.164 Arbitration Mechanisms Supported 00:20:23.164 Weighted Round Robin: Not Supported 00:20:23.164 Vendor Specific: Not Supported 00:20:23.164 Reset Timeout: 15000 ms 00:20:23.164 Doorbell Stride: 4 bytes 00:20:23.164 NVM Subsystem Reset: Not Supported 00:20:23.164 Command Sets Supported 00:20:23.164 NVM Command Set: Supported 00:20:23.164 Boot Partition: Not Supported 00:20:23.164 Memory Page Size Minimum: 4096 bytes 00:20:23.164 Memory Page Size Maximum: 4096 bytes 00:20:23.164 Persistent Memory Region: Not Supported 00:20:23.164 Optional Asynchronous Events Supported 00:20:23.164 Namespace Attribute Notices: Supported 00:20:23.164 Firmware Activation Notices: Not Supported 00:20:23.164 ANA Change Notices: Not Supported 00:20:23.164 PLE Aggregate Log Change Notices: Not Supported 00:20:23.164 LBA Status Info Alert Notices: Not Supported 00:20:23.164 EGE Aggregate Log Change Notices: Not Supported 00:20:23.164 Normal NVM Subsystem Shutdown event: Not Supported 00:20:23.164 Zone Descriptor Change Notices: Not Supported 00:20:23.164 Discovery Log Change Notices: Not Supported 00:20:23.164 Controller Attributes 00:20:23.164 128-bit Host Identifier: Supported 00:20:23.164 Non-Operational Permissive Mode: Not Supported 00:20:23.164 NVM Sets: Not Supported 00:20:23.164 Read Recovery Levels: Not Supported 00:20:23.164 Endurance Groups: Not Supported 00:20:23.164 Predictable Latency Mode: Not Supported 00:20:23.164 Traffic Based Keep ALive: Not Supported 00:20:23.164 Namespace Granularity: Not Supported 00:20:23.164 SQ Associations: Not Supported 00:20:23.164 UUID List: Not Supported 00:20:23.164 Multi-Domain Subsystem: Not Supported 00:20:23.164 Fixed Capacity Management: Not Supported 00:20:23.164 Variable Capacity Management: Not Supported 00:20:23.164 Delete Endurance Group: Not Supported 00:20:23.164 Delete NVM Set: Not Supported 00:20:23.164 Extended LBA Formats Supported: Not Supported 00:20:23.164 Flexible Data Placement Supported: Not Supported 00:20:23.164 00:20:23.164 Controller Memory Buffer Support 00:20:23.164 ================================ 00:20:23.164 Supported: No 00:20:23.164 00:20:23.164 Persistent Memory Region Support 00:20:23.164 ================================ 00:20:23.164 Supported: No 00:20:23.164 00:20:23.164 Admin Command Set Attributes 00:20:23.164 ============================ 00:20:23.164 Security Send/Receive: Not Supported 00:20:23.164 Format NVM: Not Supported 00:20:23.164 Firmware Activate/Download: Not Supported 00:20:23.164 Namespace Management: Not Supported 00:20:23.164 Device Self-Test: Not Supported 00:20:23.164 Directives: Not Supported 00:20:23.164 NVMe-MI: Not Supported 00:20:23.164 Virtualization Management: Not Supported 00:20:23.164 Doorbell Buffer Config: Not Supported 00:20:23.164 Get LBA Status Capability: Not Supported 00:20:23.164 Command & Feature Lockdown Capability: Not Supported 00:20:23.164 Abort Command Limit: 4 00:20:23.164 Async Event Request Limit: 4 00:20:23.164 Number of Firmware Slots: N/A 00:20:23.164 Firmware Slot 1 Read-Only: N/A 00:20:23.164 Firmware Activation Without Reset: N/A 00:20:23.164 Multiple 
Update Detection Support: N/A 00:20:23.164 Firmware Update Granularity: No Information Provided 00:20:23.164 Per-Namespace SMART Log: No 00:20:23.164 Asymmetric Namespace Access Log Page: Not Supported 00:20:23.164 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:23.164 Command Effects Log Page: Supported 00:20:23.164 Get Log Page Extended Data: Supported 00:20:23.164 Telemetry Log Pages: Not Supported 00:20:23.164 Persistent Event Log Pages: Not Supported 00:20:23.164 Supported Log Pages Log Page: May Support 00:20:23.164 Commands Supported & Effects Log Page: Not Supported 00:20:23.164 Feature Identifiers & Effects Log Page:May Support 00:20:23.164 NVMe-MI Commands & Effects Log Page: May Support 00:20:23.164 Data Area 4 for Telemetry Log: Not Supported 00:20:23.164 Error Log Page Entries Supported: 128 00:20:23.164 Keep Alive: Supported 00:20:23.164 Keep Alive Granularity: 10000 ms 00:20:23.164 00:20:23.164 NVM Command Set Attributes 00:20:23.164 ========================== 00:20:23.164 Submission Queue Entry Size 00:20:23.164 Max: 64 00:20:23.164 Min: 64 00:20:23.164 Completion Queue Entry Size 00:20:23.164 Max: 16 00:20:23.164 Min: 16 00:20:23.164 Number of Namespaces: 32 00:20:23.164 Compare Command: Supported 00:20:23.164 Write Uncorrectable Command: Not Supported 00:20:23.164 Dataset Management Command: Supported 00:20:23.164 Write Zeroes Command: Supported 00:20:23.164 Set Features Save Field: Not Supported 00:20:23.164 Reservations: Supported 00:20:23.164 Timestamp: Not Supported 00:20:23.164 Copy: Supported 00:20:23.164 Volatile Write Cache: Present 00:20:23.164 Atomic Write Unit (Normal): 1 00:20:23.164 Atomic Write Unit (PFail): 1 00:20:23.164 Atomic Compare & Write Unit: 1 00:20:23.164 Fused Compare & Write: Supported 00:20:23.164 Scatter-Gather List 00:20:23.164 SGL Command Set: Supported 00:20:23.164 SGL Keyed: Supported 00:20:23.164 SGL Bit Bucket Descriptor: Not Supported 00:20:23.164 SGL Metadata Pointer: Not Supported 00:20:23.164 Oversized SGL: Not Supported 00:20:23.164 SGL Metadata Address: Not Supported 00:20:23.164 SGL Offset: Supported 00:20:23.164 Transport SGL Data Block: Not Supported 00:20:23.164 Replay Protected Memory Block: Not Supported 00:20:23.164 00:20:23.164 Firmware Slot Information 00:20:23.164 ========================= 00:20:23.164 Active slot: 1 00:20:23.164 Slot 1 Firmware Revision: 24.01.1 00:20:23.164 00:20:23.164 00:20:23.164 Commands Supported and Effects 00:20:23.164 ============================== 00:20:23.164 Admin Commands 00:20:23.164 -------------- 00:20:23.164 Get Log Page (02h): Supported 00:20:23.164 Identify (06h): Supported 00:20:23.164 Abort (08h): Supported 00:20:23.164 Set Features (09h): Supported 00:20:23.164 Get Features (0Ah): Supported 00:20:23.165 Asynchronous Event Request (0Ch): Supported 00:20:23.165 Keep Alive (18h): Supported 00:20:23.165 I/O Commands 00:20:23.165 ------------ 00:20:23.165 Flush (00h): Supported LBA-Change 00:20:23.165 Write (01h): Supported LBA-Change 00:20:23.165 Read (02h): Supported 00:20:23.165 Compare (05h): Supported 00:20:23.165 Write Zeroes (08h): Supported LBA-Change 00:20:23.165 Dataset Management (09h): Supported LBA-Change 00:20:23.165 Copy (19h): Supported LBA-Change 00:20:23.165 Unknown (79h): Supported LBA-Change 00:20:23.165 Unknown (7Ah): Supported 00:20:23.165 00:20:23.165 Error Log 00:20:23.165 ========= 00:20:23.165 00:20:23.165 Arbitration 00:20:23.165 =========== 00:20:23.165 Arbitration Burst: 1 00:20:23.165 00:20:23.165 Power Management 00:20:23.165 ================ 00:20:23.165 
Number of Power States: 1 00:20:23.165 Current Power State: Power State #0 00:20:23.165 Power State #0: 00:20:23.165 Max Power: 0.00 W 00:20:23.165 Non-Operational State: Operational 00:20:23.165 Entry Latency: Not Reported 00:20:23.165 Exit Latency: Not Reported 00:20:23.165 Relative Read Throughput: 0 00:20:23.165 Relative Read Latency: 0 00:20:23.165 Relative Write Throughput: 0 00:20:23.165 Relative Write Latency: 0 00:20:23.165 Idle Power: Not Reported 00:20:23.165 Active Power: Not Reported 00:20:23.165 Non-Operational Permissive Mode: Not Supported 00:20:23.165 00:20:23.165 Health Information 00:20:23.165 ================== 00:20:23.165 Critical Warnings: 00:20:23.165 Available Spare Space: OK 00:20:23.165 Temperature: OK 00:20:23.165 Device Reliability: OK 00:20:23.165 Read Only: No 00:20:23.165 Volatile Memory Backup: OK 00:20:23.165 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:23.165 Temperature Threshold: [2024-11-17 22:20:19.526345] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.165 [2024-11-17 22:20:19.526352] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.165 [2024-11-17 22:20:19.526355] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1006d30) 00:20:23.165 [2024-11-17 22:20:19.526362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.165 [2024-11-17 22:20:19.526385] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10658d0, cid 7, qid 0 00:20:23.165 [2024-11-17 22:20:19.526470] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.165 [2024-11-17 22:20:19.526476] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.165 [2024-11-17 22:20:19.526479] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.165 [2024-11-17 22:20:19.526483] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10658d0) on tqpair=0x1006d30 00:20:23.165 [2024-11-17 22:20:19.526514] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:23.165 [2024-11-17 22:20:19.526525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.165 [2024-11-17 22:20:19.526531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.165 [2024-11-17 22:20:19.526536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.165 [2024-11-17 22:20:19.526541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.165 [2024-11-17 22:20:19.526549] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.165 [2024-11-17 22:20:19.526552] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.165 [2024-11-17 22:20:19.526556] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1006d30) 00:20:23.165 [2024-11-17 22:20:19.526562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.165 [2024-11-17 22:20:19.526581] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1065350, cid 3, qid 0 
00:20:23.165 [2024-11-17 22:20:19.526639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.165 [2024-11-17 22:20:19.526645] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.165 [2024-11-17 22:20:19.526648] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.165 [2024-11-17 22:20:19.526651] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1065350) on tqpair=0x1006d30 00:20:23.165 [2024-11-17 22:20:19.526659] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.165 [2024-11-17 22:20:19.526663] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.165 [2024-11-17 22:20:19.526666] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1006d30) 00:20:23.165 [2024-11-17 22:20:19.526672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.165 [2024-11-17 22:20:19.526691] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1065350, cid 3, qid 0 00:20:23.165 [2024-11-17 22:20:19.530814] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.165 [2024-11-17 22:20:19.530831] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.165 [2024-11-17 22:20:19.530836] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.165 [2024-11-17 22:20:19.530839] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1065350) on tqpair=0x1006d30 00:20:23.165 [2024-11-17 22:20:19.530845] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:23.165 [2024-11-17 22:20:19.530850] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:23.165 [2024-11-17 22:20:19.530861] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.165 [2024-11-17 22:20:19.530865] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.165 [2024-11-17 22:20:19.530869] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1006d30) 00:20:23.165 [2024-11-17 22:20:19.530876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.165 [2024-11-17 22:20:19.530901] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1065350, cid 3, qid 0 00:20:23.165 [2024-11-17 22:20:19.530957] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.165 [2024-11-17 22:20:19.530963] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.165 [2024-11-17 22:20:19.530966] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.165 [2024-11-17 22:20:19.530970] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1065350) on tqpair=0x1006d30 00:20:23.165 [2024-11-17 22:20:19.530978] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:20:23.165 0 Kelvin (-273 Celsius) 00:20:23.165 Available Spare: 0% 00:20:23.165 Available Spare Threshold: 0% 00:20:23.165 Life Percentage Used: 0% 00:20:23.165 Data Units Read: 0 00:20:23.165 Data Units Written: 0 00:20:23.165 Host Read Commands: 0 00:20:23.165 Host Write Commands: 0 00:20:23.165 Controller Busy Time: 0 minutes 00:20:23.166 Power Cycles: 0 00:20:23.166 
Power On Hours: 0 hours 00:20:23.166 Unsafe Shutdowns: 0 00:20:23.166 Unrecoverable Media Errors: 0 00:20:23.166 Lifetime Error Log Entries: 0 00:20:23.166 Warning Temperature Time: 0 minutes 00:20:23.166 Critical Temperature Time: 0 minutes 00:20:23.166 00:20:23.166 Number of Queues 00:20:23.166 ================ 00:20:23.166 Number of I/O Submission Queues: 127 00:20:23.166 Number of I/O Completion Queues: 127 00:20:23.166 00:20:23.166 Active Namespaces 00:20:23.166 ================= 00:20:23.166 Namespace ID:1 00:20:23.166 Error Recovery Timeout: Unlimited 00:20:23.166 Command Set Identifier: NVM (00h) 00:20:23.166 Deallocate: Supported 00:20:23.166 Deallocated/Unwritten Error: Not Supported 00:20:23.166 Deallocated Read Value: Unknown 00:20:23.166 Deallocate in Write Zeroes: Not Supported 00:20:23.166 Deallocated Guard Field: 0xFFFF 00:20:23.166 Flush: Supported 00:20:23.166 Reservation: Supported 00:20:23.166 Namespace Sharing Capabilities: Multiple Controllers 00:20:23.166 Size (in LBAs): 131072 (0GiB) 00:20:23.166 Capacity (in LBAs): 131072 (0GiB) 00:20:23.166 Utilization (in LBAs): 131072 (0GiB) 00:20:23.166 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:23.166 EUI64: ABCDEF0123456789 00:20:23.166 UUID: 49818fb0-7011-4e4d-bafb-58b73e07eb93 00:20:23.166 Thin Provisioning: Not Supported 00:20:23.166 Per-NS Atomic Units: Yes 00:20:23.166 Atomic Boundary Size (Normal): 0 00:20:23.166 Atomic Boundary Size (PFail): 0 00:20:23.166 Atomic Boundary Offset: 0 00:20:23.166 Maximum Single Source Range Length: 65535 00:20:23.166 Maximum Copy Length: 65535 00:20:23.166 Maximum Source Range Count: 1 00:20:23.166 NGUID/EUI64 Never Reused: No 00:20:23.166 Namespace Write Protected: No 00:20:23.166 Number of LBA Formats: 1 00:20:23.166 Current LBA Format: LBA Format #00 00:20:23.166 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:23.166 00:20:23.166 22:20:19 -- host/identify.sh@51 -- # sync 00:20:23.166 22:20:19 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.166 22:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.166 22:20:19 -- common/autotest_common.sh@10 -- # set +x 00:20:23.166 22:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.166 22:20:19 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:23.166 22:20:19 -- host/identify.sh@56 -- # nvmftestfini 00:20:23.166 22:20:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:23.166 22:20:19 -- nvmf/common.sh@116 -- # sync 00:20:23.166 22:20:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:23.166 22:20:19 -- nvmf/common.sh@119 -- # set +e 00:20:23.166 22:20:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:23.166 22:20:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:23.166 rmmod nvme_tcp 00:20:23.166 rmmod nvme_fabrics 00:20:23.166 rmmod nvme_keyring 00:20:23.166 22:20:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:23.166 22:20:19 -- nvmf/common.sh@123 -- # set -e 00:20:23.166 22:20:19 -- nvmf/common.sh@124 -- # return 0 00:20:23.166 22:20:19 -- nvmf/common.sh@477 -- # '[' -n 82872 ']' 00:20:23.166 22:20:19 -- nvmf/common.sh@478 -- # killprocess 82872 00:20:23.166 22:20:19 -- common/autotest_common.sh@936 -- # '[' -z 82872 ']' 00:20:23.166 22:20:19 -- common/autotest_common.sh@940 -- # kill -0 82872 00:20:23.166 22:20:19 -- common/autotest_common.sh@941 -- # uname 00:20:23.166 22:20:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:23.166 22:20:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 82872 00:20:23.166 22:20:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:23.166 22:20:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:23.166 22:20:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82872' 00:20:23.166 killing process with pid 82872 00:20:23.166 22:20:19 -- common/autotest_common.sh@955 -- # kill 82872 00:20:23.166 [2024-11-17 22:20:19.713353] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:23.166 22:20:19 -- common/autotest_common.sh@960 -- # wait 82872 00:20:23.733 22:20:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:23.733 22:20:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:23.733 22:20:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:23.733 22:20:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.733 22:20:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:23.733 22:20:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.733 22:20:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.733 22:20:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.733 22:20:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:23.733 00:20:23.733 real 0m2.825s 00:20:23.733 user 0m7.620s 00:20:23.733 sys 0m0.748s 00:20:23.733 22:20:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:23.733 22:20:20 -- common/autotest_common.sh@10 -- # set +x 00:20:23.733 ************************************ 00:20:23.733 END TEST nvmf_identify 00:20:23.733 ************************************ 00:20:23.733 22:20:20 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:23.733 22:20:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:23.733 22:20:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:23.733 22:20:20 -- common/autotest_common.sh@10 -- # set +x 00:20:23.733 ************************************ 00:20:23.733 START TEST nvmf_perf 00:20:23.733 ************************************ 00:20:23.733 22:20:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:23.733 * Looking for test storage... 
00:20:23.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:23.733 22:20:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:23.733 22:20:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:23.734 22:20:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:23.734 22:20:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:23.734 22:20:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:23.734 22:20:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:23.734 22:20:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:23.734 22:20:20 -- scripts/common.sh@335 -- # IFS=.-: 00:20:23.734 22:20:20 -- scripts/common.sh@335 -- # read -ra ver1 00:20:23.734 22:20:20 -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.734 22:20:20 -- scripts/common.sh@336 -- # read -ra ver2 00:20:23.734 22:20:20 -- scripts/common.sh@337 -- # local 'op=<' 00:20:23.734 22:20:20 -- scripts/common.sh@339 -- # ver1_l=2 00:20:23.734 22:20:20 -- scripts/common.sh@340 -- # ver2_l=1 00:20:23.734 22:20:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:23.734 22:20:20 -- scripts/common.sh@343 -- # case "$op" in 00:20:23.734 22:20:20 -- scripts/common.sh@344 -- # : 1 00:20:23.734 22:20:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:23.734 22:20:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:23.734 22:20:20 -- scripts/common.sh@364 -- # decimal 1 00:20:23.992 22:20:20 -- scripts/common.sh@352 -- # local d=1 00:20:23.992 22:20:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.992 22:20:20 -- scripts/common.sh@354 -- # echo 1 00:20:23.992 22:20:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:23.993 22:20:20 -- scripts/common.sh@365 -- # decimal 2 00:20:23.993 22:20:20 -- scripts/common.sh@352 -- # local d=2 00:20:23.993 22:20:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.993 22:20:20 -- scripts/common.sh@354 -- # echo 2 00:20:23.993 22:20:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:23.993 22:20:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:23.993 22:20:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:23.993 22:20:20 -- scripts/common.sh@367 -- # return 0 00:20:23.993 22:20:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.993 22:20:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:23.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.993 --rc genhtml_branch_coverage=1 00:20:23.993 --rc genhtml_function_coverage=1 00:20:23.993 --rc genhtml_legend=1 00:20:23.993 --rc geninfo_all_blocks=1 00:20:23.993 --rc geninfo_unexecuted_blocks=1 00:20:23.993 00:20:23.993 ' 00:20:23.993 22:20:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:23.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.993 --rc genhtml_branch_coverage=1 00:20:23.993 --rc genhtml_function_coverage=1 00:20:23.993 --rc genhtml_legend=1 00:20:23.993 --rc geninfo_all_blocks=1 00:20:23.993 --rc geninfo_unexecuted_blocks=1 00:20:23.993 00:20:23.993 ' 00:20:23.993 22:20:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:23.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.993 --rc genhtml_branch_coverage=1 00:20:23.993 --rc genhtml_function_coverage=1 00:20:23.993 --rc genhtml_legend=1 00:20:23.993 --rc geninfo_all_blocks=1 00:20:23.993 --rc geninfo_unexecuted_blocks=1 00:20:23.993 00:20:23.993 ' 00:20:23.993 
22:20:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:23.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.993 --rc genhtml_branch_coverage=1 00:20:23.993 --rc genhtml_function_coverage=1 00:20:23.993 --rc genhtml_legend=1 00:20:23.993 --rc geninfo_all_blocks=1 00:20:23.993 --rc geninfo_unexecuted_blocks=1 00:20:23.993 00:20:23.993 ' 00:20:23.993 22:20:20 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.993 22:20:20 -- nvmf/common.sh@7 -- # uname -s 00:20:23.993 22:20:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.993 22:20:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.993 22:20:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.993 22:20:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.993 22:20:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.993 22:20:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.993 22:20:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.993 22:20:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.993 22:20:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.993 22:20:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.993 22:20:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:20:23.993 22:20:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:20:23.993 22:20:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.993 22:20:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.993 22:20:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.993 22:20:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.993 22:20:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.993 22:20:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.993 22:20:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.993 22:20:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.993 22:20:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.993 22:20:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.993 22:20:20 -- paths/export.sh@5 -- # export PATH 00:20:23.993 22:20:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.993 22:20:20 -- nvmf/common.sh@46 -- # : 0 00:20:23.993 22:20:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:23.993 22:20:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:23.993 22:20:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:23.993 22:20:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.993 22:20:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.993 22:20:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:23.993 22:20:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:23.993 22:20:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:23.993 22:20:20 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:23.993 22:20:20 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:23.993 22:20:20 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:23.993 22:20:20 -- host/perf.sh@17 -- # nvmftestinit 00:20:23.993 22:20:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:23.993 22:20:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.993 22:20:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:23.993 22:20:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:23.993 22:20:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:23.993 22:20:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.993 22:20:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.993 22:20:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.993 22:20:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:23.993 22:20:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:23.993 22:20:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:23.993 22:20:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:23.993 22:20:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:23.993 22:20:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:23.993 22:20:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.993 22:20:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.993 22:20:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:23.993 22:20:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:23.993 22:20:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:23.993 22:20:20 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:23.993 22:20:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:23.993 22:20:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.993 22:20:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:23.993 22:20:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:23.993 22:20:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:23.993 22:20:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:23.993 22:20:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:23.993 22:20:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:23.993 Cannot find device "nvmf_tgt_br" 00:20:23.993 22:20:20 -- nvmf/common.sh@154 -- # true 00:20:23.993 22:20:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.993 Cannot find device "nvmf_tgt_br2" 00:20:23.993 22:20:20 -- nvmf/common.sh@155 -- # true 00:20:23.993 22:20:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:23.993 22:20:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:23.993 Cannot find device "nvmf_tgt_br" 00:20:23.993 22:20:20 -- nvmf/common.sh@157 -- # true 00:20:23.993 22:20:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:23.993 Cannot find device "nvmf_tgt_br2" 00:20:23.993 22:20:20 -- nvmf/common.sh@158 -- # true 00:20:23.993 22:20:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:23.993 22:20:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:23.993 22:20:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:23.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.993 22:20:20 -- nvmf/common.sh@161 -- # true 00:20:23.993 22:20:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:23.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.993 22:20:20 -- nvmf/common.sh@162 -- # true 00:20:23.993 22:20:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:23.993 22:20:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:23.993 22:20:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:23.993 22:20:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:23.993 22:20:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:23.993 22:20:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:23.993 22:20:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:23.993 22:20:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:23.993 22:20:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:23.994 22:20:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:23.994 22:20:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:23.994 22:20:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:23.994 22:20:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:24.252 22:20:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:24.252 22:20:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:20:24.252 22:20:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:24.252 22:20:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:24.252 22:20:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:24.252 22:20:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:24.252 22:20:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:24.252 22:20:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:24.252 22:20:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:24.252 22:20:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:24.252 22:20:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:24.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:20:24.252 00:20:24.252 --- 10.0.0.2 ping statistics --- 00:20:24.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.252 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:24.252 22:20:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:24.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:24.252 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:20:24.252 00:20:24.252 --- 10.0.0.3 ping statistics --- 00:20:24.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.252 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:24.252 22:20:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:24.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:24.252 00:20:24.252 --- 10.0.0.1 ping statistics --- 00:20:24.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.252 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:24.252 22:20:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.252 22:20:20 -- nvmf/common.sh@421 -- # return 0 00:20:24.252 22:20:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:24.252 22:20:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.252 22:20:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:24.252 22:20:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:24.252 22:20:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.252 22:20:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:24.252 22:20:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:24.252 22:20:20 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:24.252 22:20:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:24.252 22:20:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.252 22:20:20 -- common/autotest_common.sh@10 -- # set +x 00:20:24.252 22:20:20 -- nvmf/common.sh@469 -- # nvmfpid=83105 00:20:24.252 22:20:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:24.252 22:20:20 -- nvmf/common.sh@470 -- # waitforlisten 83105 00:20:24.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
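For reference, the nvmf_veth_init sequence traced above condenses to roughly the following topology setup. This is a sketch reconstructed only from the ip/iptables commands visible in the trace (interface names, addresses and rule order are taken from the log, not re-checked against test/nvmf/common.sh), so treat it as illustrative rather than the canonical helper:

# Condensed sketch of the veth/namespace topology built by nvmf_veth_init, per the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With that bridge in place the host side at 10.0.0.1 can reach the target addresses 10.0.0.2/10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, which is what the ping checks above confirm before nvmf_tgt is launched in the namespace and waitforlisten blocks on /var/tmp/spdk.sock.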
00:20:24.252 22:20:20 -- common/autotest_common.sh@829 -- # '[' -z 83105 ']' 00:20:24.252 22:20:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.252 22:20:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.252 22:20:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.252 22:20:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.252 22:20:20 -- common/autotest_common.sh@10 -- # set +x 00:20:24.252 [2024-11-17 22:20:20.787974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:24.252 [2024-11-17 22:20:20.788061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.510 [2024-11-17 22:20:20.923582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.510 [2024-11-17 22:20:21.013655] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:24.510 [2024-11-17 22:20:21.013822] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.510 [2024-11-17 22:20:21.013836] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.510 [2024-11-17 22:20:21.013845] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.510 [2024-11-17 22:20:21.013966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.510 [2024-11-17 22:20:21.014411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.510 [2024-11-17 22:20:21.014561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.511 [2024-11-17 22:20:21.014568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.445 22:20:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.445 22:20:21 -- common/autotest_common.sh@862 -- # return 0 00:20:25.445 22:20:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:25.445 22:20:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.445 22:20:21 -- common/autotest_common.sh@10 -- # set +x 00:20:25.445 22:20:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.445 22:20:21 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:25.445 22:20:21 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:25.704 22:20:22 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:25.704 22:20:22 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:25.962 22:20:22 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:25.962 22:20:22 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:26.221 22:20:22 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:26.221 22:20:22 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:26.221 22:20:22 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:26.221 22:20:22 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:26.221 22:20:22 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:26.479 [2024-11-17 22:20:22.895348] 
tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.479 22:20:22 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:26.737 22:20:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:26.737 22:20:23 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:26.995 22:20:23 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:26.995 22:20:23 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:27.254 22:20:23 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.513 [2024-11-17 22:20:23.873355] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.513 22:20:23 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:27.513 22:20:24 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:27.513 22:20:24 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:27.513 22:20:24 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:27.513 22:20:24 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:28.888 Initializing NVMe Controllers 00:20:28.888 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:28.888 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:28.888 Initialization complete. Launching workers. 00:20:28.888 ======================================================== 00:20:28.888 Latency(us) 00:20:28.888 Device Information : IOPS MiB/s Average min max 00:20:28.888 PCIE (0000:00:06.0) NSID 1 from core 0: 20832.79 81.38 1535.48 414.44 7141.72 00:20:28.888 ======================================================== 00:20:28.889 Total : 20832.79 81.38 1535.48 414.44 7141.72 00:20:28.889 00:20:28.889 22:20:25 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:30.264 Initializing NVMe Controllers 00:20:30.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:30.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:30.264 Initialization complete. Launching workers. 
00:20:30.264 ======================================================== 00:20:30.264 Latency(us) 00:20:30.264 Device Information : IOPS MiB/s Average min max 00:20:30.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3156.98 12.33 316.48 112.42 4263.62 00:20:30.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8063.49 7011.18 12025.81 00:20:30.264 ======================================================== 00:20:30.264 Total : 3281.98 12.82 611.53 112.42 12025.81 00:20:30.264 00:20:30.264 22:20:26 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:31.637 Initializing NVMe Controllers 00:20:31.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:31.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:31.637 Initialization complete. Launching workers. 00:20:31.637 ======================================================== 00:20:31.637 Latency(us) 00:20:31.637 Device Information : IOPS MiB/s Average min max 00:20:31.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9896.99 38.66 3233.76 559.95 6467.13 00:20:31.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2678.00 10.46 12040.95 6907.40 20188.55 00:20:31.637 ======================================================== 00:20:31.637 Total : 12574.99 49.12 5109.36 559.95 20188.55 00:20:31.637 00:20:31.637 22:20:27 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:31.637 22:20:27 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:34.166 Initializing NVMe Controllers 00:20:34.166 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.166 Controller IO queue size 128, less than required. 00:20:34.166 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.166 Controller IO queue size 128, less than required. 00:20:34.166 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:34.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:34.166 Initialization complete. Launching workers. 
00:20:34.166 ======================================================== 00:20:34.166 Latency(us) 00:20:34.166 Device Information : IOPS MiB/s Average min max 00:20:34.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1861.39 465.35 69495.13 46336.92 126994.35 00:20:34.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 631.46 157.87 214534.83 66518.87 345233.82 00:20:34.166 ======================================================== 00:20:34.166 Total : 2492.86 623.21 106235.03 46336.92 345233.82 00:20:34.166 00:20:34.166 22:20:30 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:34.166 No valid NVMe controllers or AIO or URING devices found 00:20:34.166 Initializing NVMe Controllers 00:20:34.166 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.166 Controller IO queue size 128, less than required. 00:20:34.166 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.166 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:34.166 Controller IO queue size 128, less than required. 00:20:34.166 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.166 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:34.166 WARNING: Some requested NVMe devices were skipped 00:20:34.166 22:20:30 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:36.699 Initializing NVMe Controllers 00:20:36.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.699 Controller IO queue size 128, less than required. 00:20:36.699 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:36.699 Controller IO queue size 128, less than required. 00:20:36.699 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:36.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:36.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:36.699 Initialization complete. Launching workers. 
00:20:36.699 00:20:36.699 ==================== 00:20:36.699 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:36.699 TCP transport: 00:20:36.699 polls: 11458 00:20:36.699 idle_polls: 9039 00:20:36.699 sock_completions: 2419 00:20:36.699 nvme_completions: 4812 00:20:36.699 submitted_requests: 7370 00:20:36.699 queued_requests: 1 00:20:36.699 00:20:36.699 ==================== 00:20:36.699 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:36.699 TCP transport: 00:20:36.699 polls: 11515 00:20:36.699 idle_polls: 8969 00:20:36.699 sock_completions: 2546 00:20:36.699 nvme_completions: 5141 00:20:36.699 submitted_requests: 7963 00:20:36.699 queued_requests: 1 00:20:36.699 ======================================================== 00:20:36.699 Latency(us) 00:20:36.699 Device Information : IOPS MiB/s Average min max 00:20:36.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1263.73 315.93 103266.56 71812.14 177452.91 00:20:36.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1345.55 336.39 96130.03 41457.19 122483.53 00:20:36.699 ======================================================== 00:20:36.699 Total : 2609.27 652.32 99586.41 41457.19 177452.91 00:20:36.699 00:20:36.699 22:20:33 -- host/perf.sh@66 -- # sync 00:20:36.699 22:20:33 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:36.957 22:20:33 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:36.957 22:20:33 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:36.957 22:20:33 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:37.216 22:20:33 -- host/perf.sh@72 -- # ls_guid=4eb3c936-f342-47d6-97bb-259018f4a312 00:20:37.216 22:20:33 -- host/perf.sh@73 -- # get_lvs_free_mb 4eb3c936-f342-47d6-97bb-259018f4a312 00:20:37.216 22:20:33 -- common/autotest_common.sh@1353 -- # local lvs_uuid=4eb3c936-f342-47d6-97bb-259018f4a312 00:20:37.216 22:20:33 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:37.216 22:20:33 -- common/autotest_common.sh@1355 -- # local fc 00:20:37.216 22:20:33 -- common/autotest_common.sh@1356 -- # local cs 00:20:37.216 22:20:33 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:37.782 22:20:34 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:37.782 { 00:20:37.782 "base_bdev": "Nvme0n1", 00:20:37.782 "block_size": 4096, 00:20:37.782 "cluster_size": 4194304, 00:20:37.782 "free_clusters": 1278, 00:20:37.782 "name": "lvs_0", 00:20:37.782 "total_data_clusters": 1278, 00:20:37.782 "uuid": "4eb3c936-f342-47d6-97bb-259018f4a312" 00:20:37.782 } 00:20:37.782 ]' 00:20:37.782 22:20:34 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="4eb3c936-f342-47d6-97bb-259018f4a312") .free_clusters' 00:20:37.782 22:20:34 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:37.782 22:20:34 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="4eb3c936-f342-47d6-97bb-259018f4a312") .cluster_size' 00:20:37.782 5112 00:20:37.782 22:20:34 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:37.782 22:20:34 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:37.782 22:20:34 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:37.782 22:20:34 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:37.782 22:20:34 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 4eb3c936-f342-47d6-97bb-259018f4a312 lbd_0 5112 00:20:38.042 22:20:34 -- host/perf.sh@80 -- # lb_guid=10f74016-fed7-46aa-91b4-3f4922b47d63 00:20:38.042 22:20:34 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 10f74016-fed7-46aa-91b4-3f4922b47d63 lvs_n_0 00:20:38.313 22:20:34 -- host/perf.sh@83 -- # ls_nested_guid=131ade7a-18ae-4bf9-a1a1-0a56225e1379 00:20:38.313 22:20:34 -- host/perf.sh@84 -- # get_lvs_free_mb 131ade7a-18ae-4bf9-a1a1-0a56225e1379 00:20:38.313 22:20:34 -- common/autotest_common.sh@1353 -- # local lvs_uuid=131ade7a-18ae-4bf9-a1a1-0a56225e1379 00:20:38.313 22:20:34 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:38.313 22:20:34 -- common/autotest_common.sh@1355 -- # local fc 00:20:38.313 22:20:34 -- common/autotest_common.sh@1356 -- # local cs 00:20:38.313 22:20:34 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:38.599 22:20:35 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:38.599 { 00:20:38.599 "base_bdev": "Nvme0n1", 00:20:38.599 "block_size": 4096, 00:20:38.599 "cluster_size": 4194304, 00:20:38.599 "free_clusters": 0, 00:20:38.599 "name": "lvs_0", 00:20:38.599 "total_data_clusters": 1278, 00:20:38.599 "uuid": "4eb3c936-f342-47d6-97bb-259018f4a312" 00:20:38.599 }, 00:20:38.599 { 00:20:38.599 "base_bdev": "10f74016-fed7-46aa-91b4-3f4922b47d63", 00:20:38.599 "block_size": 4096, 00:20:38.599 "cluster_size": 4194304, 00:20:38.599 "free_clusters": 1276, 00:20:38.599 "name": "lvs_n_0", 00:20:38.599 "total_data_clusters": 1276, 00:20:38.599 "uuid": "131ade7a-18ae-4bf9-a1a1-0a56225e1379" 00:20:38.599 } 00:20:38.599 ]' 00:20:38.599 22:20:35 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="131ade7a-18ae-4bf9-a1a1-0a56225e1379") .free_clusters' 00:20:38.599 22:20:35 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:38.599 22:20:35 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="131ade7a-18ae-4bf9-a1a1-0a56225e1379") .cluster_size' 00:20:38.599 5104 00:20:38.599 22:20:35 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:38.599 22:20:35 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:38.599 22:20:35 -- common/autotest_common.sh@1363 -- # echo 5104 00:20:38.599 22:20:35 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:38.599 22:20:35 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 131ade7a-18ae-4bf9-a1a1-0a56225e1379 lbd_nest_0 5104 00:20:38.868 22:20:35 -- host/perf.sh@88 -- # lb_nested_guid=2ae4b7d2-2484-4d76-a946-95b467754962 00:20:38.869 22:20:35 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.126 22:20:35 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:39.126 22:20:35 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2ae4b7d2-2484-4d76-a946-95b467754962 00:20:39.385 22:20:35 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.643 22:20:36 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:39.643 22:20:36 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:39.643 22:20:36 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:39.643 22:20:36 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:39.643 22:20:36 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:39.901 No valid NVMe controllers or AIO or URING devices found 00:20:39.901 Initializing NVMe Controllers 00:20:39.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:39.901 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:39.901 WARNING: Some requested NVMe devices were skipped 00:20:39.901 22:20:36 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:39.901 22:20:36 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.103 Initializing NVMe Controllers 00:20:52.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:52.103 Initialization complete. Launching workers. 00:20:52.103 ======================================================== 00:20:52.103 Latency(us) 00:20:52.103 Device Information : IOPS MiB/s Average min max 00:20:52.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 819.53 102.44 1218.87 404.52 8144.11 00:20:52.103 ======================================================== 00:20:52.103 Total : 819.53 102.44 1218.87 404.52 8144.11 00:20:52.103 00:20:52.103 22:20:46 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:52.103 22:20:46 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:52.103 22:20:46 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.103 No valid NVMe controllers or AIO or URING devices found 00:20:52.103 Initializing NVMe Controllers 00:20:52.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.103 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:52.103 WARNING: Some requested NVMe devices were skipped 00:20:52.103 22:20:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:52.103 22:20:47 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.072 Initializing NVMe Controllers 00:21:02.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:02.072 Initialization complete. Launching workers. 
00:21:02.072 ======================================================== 00:21:02.072 Latency(us) 00:21:02.072 Device Information : IOPS MiB/s Average min max 00:21:02.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1035.80 129.47 30936.70 7822.44 253737.77 00:21:02.072 ======================================================== 00:21:02.072 Total : 1035.80 129.47 30936.70 7822.44 253737.77 00:21:02.072 00:21:02.072 22:20:57 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:02.072 22:20:57 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:02.072 22:20:57 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.072 No valid NVMe controllers or AIO or URING devices found 00:21:02.072 Initializing NVMe Controllers 00:21:02.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.072 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:02.072 WARNING: Some requested NVMe devices were skipped 00:21:02.072 22:20:57 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:02.072 22:20:57 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.055 Initializing NVMe Controllers 00:21:12.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:12.055 Controller IO queue size 128, less than required. 00:21:12.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:12.055 Initialization complete. Launching workers. 
00:21:12.055 ======================================================== 00:21:12.055 Latency(us) 00:21:12.055 Device Information : IOPS MiB/s Average min max 00:21:12.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3695.10 461.89 34703.62 13124.28 69440.09 00:21:12.055 ======================================================== 00:21:12.055 Total : 3695.10 461.89 34703.62 13124.28 69440.09 00:21:12.055 00:21:12.055 22:21:07 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.055 22:21:08 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2ae4b7d2-2484-4d76-a946-95b467754962 00:21:12.055 22:21:08 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:12.314 22:21:08 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 10f74016-fed7-46aa-91b4-3f4922b47d63 00:21:12.574 22:21:08 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:12.832 22:21:09 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:12.832 22:21:09 -- host/perf.sh@114 -- # nvmftestfini 00:21:12.832 22:21:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:12.832 22:21:09 -- nvmf/common.sh@116 -- # sync 00:21:12.832 22:21:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:12.832 22:21:09 -- nvmf/common.sh@119 -- # set +e 00:21:12.832 22:21:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:12.832 22:21:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:12.832 rmmod nvme_tcp 00:21:12.832 rmmod nvme_fabrics 00:21:12.832 rmmod nvme_keyring 00:21:12.832 22:21:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:12.832 22:21:09 -- nvmf/common.sh@123 -- # set -e 00:21:12.832 22:21:09 -- nvmf/common.sh@124 -- # return 0 00:21:12.832 22:21:09 -- nvmf/common.sh@477 -- # '[' -n 83105 ']' 00:21:12.832 22:21:09 -- nvmf/common.sh@478 -- # killprocess 83105 00:21:12.832 22:21:09 -- common/autotest_common.sh@936 -- # '[' -z 83105 ']' 00:21:12.832 22:21:09 -- common/autotest_common.sh@940 -- # kill -0 83105 00:21:12.832 22:21:09 -- common/autotest_common.sh@941 -- # uname 00:21:12.832 22:21:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:12.832 22:21:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83105 00:21:12.832 22:21:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:12.832 22:21:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:12.832 killing process with pid 83105 00:21:12.832 22:21:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83105' 00:21:12.832 22:21:09 -- common/autotest_common.sh@955 -- # kill 83105 00:21:12.832 22:21:09 -- common/autotest_common.sh@960 -- # wait 83105 00:21:14.209 22:21:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:14.209 22:21:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:14.209 22:21:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:14.209 22:21:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.209 22:21:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:14.209 22:21:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.209 22:21:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.209 22:21:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.209 22:21:10 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:14.209 ************************************ 00:21:14.209 END TEST nvmf_perf 00:21:14.209 ************************************ 00:21:14.209 00:21:14.209 real 0m50.531s 00:21:14.209 user 3m10.711s 00:21:14.209 sys 0m10.411s 00:21:14.209 22:21:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:14.209 22:21:10 -- common/autotest_common.sh@10 -- # set +x 00:21:14.209 22:21:10 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:14.209 22:21:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:14.209 22:21:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:14.209 22:21:10 -- common/autotest_common.sh@10 -- # set +x 00:21:14.209 ************************************ 00:21:14.209 START TEST nvmf_fio_host 00:21:14.209 ************************************ 00:21:14.209 22:21:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:14.468 * Looking for test storage... 00:21:14.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:14.468 22:21:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:14.468 22:21:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:14.468 22:21:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:14.468 22:21:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:14.468 22:21:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:14.468 22:21:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:14.468 22:21:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:14.468 22:21:10 -- scripts/common.sh@335 -- # IFS=.-: 00:21:14.468 22:21:10 -- scripts/common.sh@335 -- # read -ra ver1 00:21:14.468 22:21:10 -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.468 22:21:10 -- scripts/common.sh@336 -- # read -ra ver2 00:21:14.468 22:21:10 -- scripts/common.sh@337 -- # local 'op=<' 00:21:14.468 22:21:10 -- scripts/common.sh@339 -- # ver1_l=2 00:21:14.468 22:21:10 -- scripts/common.sh@340 -- # ver2_l=1 00:21:14.468 22:21:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:14.468 22:21:10 -- scripts/common.sh@343 -- # case "$op" in 00:21:14.468 22:21:10 -- scripts/common.sh@344 -- # : 1 00:21:14.468 22:21:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:14.468 22:21:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.468 22:21:10 -- scripts/common.sh@364 -- # decimal 1 00:21:14.468 22:21:10 -- scripts/common.sh@352 -- # local d=1 00:21:14.468 22:21:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.468 22:21:10 -- scripts/common.sh@354 -- # echo 1 00:21:14.468 22:21:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:14.468 22:21:10 -- scripts/common.sh@365 -- # decimal 2 00:21:14.468 22:21:10 -- scripts/common.sh@352 -- # local d=2 00:21:14.468 22:21:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.468 22:21:10 -- scripts/common.sh@354 -- # echo 2 00:21:14.468 22:21:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:14.468 22:21:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:14.468 22:21:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:14.468 22:21:10 -- scripts/common.sh@367 -- # return 0 00:21:14.468 22:21:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.468 22:21:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:14.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.468 --rc genhtml_branch_coverage=1 00:21:14.468 --rc genhtml_function_coverage=1 00:21:14.468 --rc genhtml_legend=1 00:21:14.468 --rc geninfo_all_blocks=1 00:21:14.468 --rc geninfo_unexecuted_blocks=1 00:21:14.468 00:21:14.468 ' 00:21:14.468 22:21:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:14.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.468 --rc genhtml_branch_coverage=1 00:21:14.468 --rc genhtml_function_coverage=1 00:21:14.468 --rc genhtml_legend=1 00:21:14.468 --rc geninfo_all_blocks=1 00:21:14.468 --rc geninfo_unexecuted_blocks=1 00:21:14.468 00:21:14.469 ' 00:21:14.469 22:21:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:14.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.469 --rc genhtml_branch_coverage=1 00:21:14.469 --rc genhtml_function_coverage=1 00:21:14.469 --rc genhtml_legend=1 00:21:14.469 --rc geninfo_all_blocks=1 00:21:14.469 --rc geninfo_unexecuted_blocks=1 00:21:14.469 00:21:14.469 ' 00:21:14.469 22:21:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:14.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.469 --rc genhtml_branch_coverage=1 00:21:14.469 --rc genhtml_function_coverage=1 00:21:14.469 --rc genhtml_legend=1 00:21:14.469 --rc geninfo_all_blocks=1 00:21:14.469 --rc geninfo_unexecuted_blocks=1 00:21:14.469 00:21:14.469 ' 00:21:14.469 22:21:10 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:14.469 22:21:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.469 22:21:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.469 22:21:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.469 22:21:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.469 22:21:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.469 22:21:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.469 22:21:10 -- paths/export.sh@5 -- # export PATH 00:21:14.469 22:21:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.469 22:21:10 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:14.469 22:21:10 -- nvmf/common.sh@7 -- # uname -s 00:21:14.469 22:21:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.469 22:21:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.469 22:21:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.469 22:21:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.469 22:21:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.469 22:21:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.469 22:21:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.469 22:21:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.469 22:21:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.469 22:21:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.469 22:21:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:21:14.469 22:21:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:21:14.469 22:21:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.469 22:21:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.469 22:21:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:14.469 22:21:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:14.469 22:21:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.469 22:21:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.469 22:21:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.469 22:21:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.469 22:21:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.469 22:21:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.469 22:21:10 -- paths/export.sh@5 -- # export PATH 00:21:14.469 22:21:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.469 22:21:10 -- nvmf/common.sh@46 -- # : 0 00:21:14.469 22:21:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:14.469 22:21:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:14.469 22:21:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:14.469 22:21:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.469 22:21:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.469 22:21:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:14.469 22:21:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:14.469 22:21:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:14.469 22:21:10 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:14.469 22:21:10 -- host/fio.sh@14 -- # nvmftestinit 00:21:14.469 22:21:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:14.469 22:21:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.469 22:21:10 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:21:14.469 22:21:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:14.469 22:21:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:14.469 22:21:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.469 22:21:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.469 22:21:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.469 22:21:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:14.469 22:21:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:14.469 22:21:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:14.469 22:21:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:14.469 22:21:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:14.469 22:21:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:14.469 22:21:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.469 22:21:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.469 22:21:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:14.469 22:21:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:14.469 22:21:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:14.469 22:21:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:14.469 22:21:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:14.469 22:21:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.469 22:21:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:14.469 22:21:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:14.469 22:21:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:14.469 22:21:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:14.469 22:21:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:14.469 22:21:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:14.469 Cannot find device "nvmf_tgt_br" 00:21:14.469 22:21:11 -- nvmf/common.sh@154 -- # true 00:21:14.469 22:21:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:14.469 Cannot find device "nvmf_tgt_br2" 00:21:14.469 22:21:11 -- nvmf/common.sh@155 -- # true 00:21:14.469 22:21:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:14.469 22:21:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:14.469 Cannot find device "nvmf_tgt_br" 00:21:14.469 22:21:11 -- nvmf/common.sh@157 -- # true 00:21:14.469 22:21:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:14.469 Cannot find device "nvmf_tgt_br2" 00:21:14.469 22:21:11 -- nvmf/common.sh@158 -- # true 00:21:14.469 22:21:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:14.728 22:21:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:14.728 22:21:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:14.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:14.728 22:21:11 -- nvmf/common.sh@161 -- # true 00:21:14.728 22:21:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:14.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:14.728 22:21:11 -- nvmf/common.sh@162 -- # true 00:21:14.728 22:21:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:14.728 22:21:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:14.728 22:21:11 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:14.728 22:21:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:14.728 22:21:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:14.728 22:21:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:14.728 22:21:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:14.728 22:21:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:14.728 22:21:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:14.728 22:21:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:14.728 22:21:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:14.728 22:21:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:14.728 22:21:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:14.728 22:21:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:14.728 22:21:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:14.728 22:21:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:14.728 22:21:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:14.728 22:21:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:14.728 22:21:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:14.728 22:21:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:14.728 22:21:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:14.728 22:21:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:14.728 22:21:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:14.728 22:21:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:14.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:21:14.728 00:21:14.728 --- 10.0.0.2 ping statistics --- 00:21:14.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.728 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:21:14.728 22:21:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:14.728 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:14.728 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:21:14.728 00:21:14.728 --- 10.0.0.3 ping statistics --- 00:21:14.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.728 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:14.728 22:21:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:14.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:14.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:14.728 00:21:14.728 --- 10.0.0.1 ping statistics --- 00:21:14.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.728 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:14.728 22:21:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.728 22:21:11 -- nvmf/common.sh@421 -- # return 0 00:21:14.728 22:21:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:14.728 22:21:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.728 22:21:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:14.728 22:21:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:14.728 22:21:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.728 22:21:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:14.728 22:21:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:14.728 22:21:11 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:14.728 22:21:11 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:14.728 22:21:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:14.728 22:21:11 -- common/autotest_common.sh@10 -- # set +x 00:21:14.728 22:21:11 -- host/fio.sh@24 -- # nvmfpid=84071 00:21:14.728 22:21:11 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:14.728 22:21:11 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:14.728 22:21:11 -- host/fio.sh@28 -- # waitforlisten 84071 00:21:14.728 22:21:11 -- common/autotest_common.sh@829 -- # '[' -z 84071 ']' 00:21:14.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.728 22:21:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.728 22:21:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.728 22:21:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.728 22:21:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.728 22:21:11 -- common/autotest_common.sh@10 -- # set +x 00:21:14.987 [2024-11-17 22:21:11.366823] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:14.987 [2024-11-17 22:21:11.367046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.987 [2024-11-17 22:21:11.503151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.246 [2024-11-17 22:21:11.613316] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:15.246 [2024-11-17 22:21:11.613856] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.246 [2024-11-17 22:21:11.613888] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.246 [2024-11-17 22:21:11.613901] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
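The app_setup_trace notices above indicate that tracepoint group mask 0xFFFF is enabled for this nvmf_tgt instance. A runtime snapshot of those events could be taken with the commands quoted in the notice itself; only the copy destination below is an arbitrary choice added for illustration:

spdk_trace -s nvmf -i 0          # snapshot of events at runtime, as suggested by the notice
cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the raw trace file for offline analysis/debug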
00:21:15.246 [2024-11-17 22:21:11.614038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.246 [2024-11-17 22:21:11.614262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.246 [2024-11-17 22:21:11.614418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.246 [2024-11-17 22:21:11.614423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.814 22:21:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.814 22:21:12 -- common/autotest_common.sh@862 -- # return 0 00:21:15.814 22:21:12 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:16.073 [2024-11-17 22:21:12.639429] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.073 22:21:12 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:16.073 22:21:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:16.073 22:21:12 -- common/autotest_common.sh@10 -- # set +x 00:21:16.332 22:21:12 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:16.591 Malloc1 00:21:16.591 22:21:13 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.850 22:21:13 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:17.109 22:21:13 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:17.109 [2024-11-17 22:21:13.706276] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.367 22:21:13 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:17.626 22:21:13 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:17.626 22:21:13 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:17.626 22:21:13 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:17.626 22:21:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:17.626 22:21:13 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:17.626 22:21:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:17.626 22:21:13 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:17.626 22:21:13 -- common/autotest_common.sh@1330 -- # shift 00:21:17.626 22:21:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:17.626 22:21:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.626 22:21:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:17.626 22:21:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:17.627 22:21:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:17.627 22:21:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:17.627 22:21:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:17.627 22:21:14 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.627 22:21:14 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:17.627 22:21:14 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:17.627 22:21:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:17.627 22:21:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:17.627 22:21:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:17.627 22:21:14 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:17.627 22:21:14 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:17.627 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:17.627 fio-3.35 00:21:17.627 Starting 1 thread 00:21:20.161 00:21:20.161 test: (groupid=0, jobs=1): err= 0: pid=84202: Sun Nov 17 22:21:16 2024 00:21:20.161 read: IOPS=10.3k, BW=40.1MiB/s (42.1MB/s)(80.4MiB/2006msec) 00:21:20.161 slat (nsec): min=1706, max=333785, avg=2241.19, stdev=3249.18 00:21:20.161 clat (usec): min=3612, max=11447, avg=6587.77, stdev=580.69 00:21:20.161 lat (usec): min=3645, max=11449, avg=6590.01, stdev=580.64 00:21:20.161 clat percentiles (usec): 00:21:20.161 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6128], 00:21:20.161 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:21:20.161 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7308], 95.00th=[ 7570], 00:21:20.161 | 99.00th=[ 8356], 99.50th=[ 8717], 99.90th=[10159], 99.95th=[10421], 00:21:20.161 | 99.99th=[11076] 00:21:20.161 bw ( KiB/s): min=39792, max=41776, per=100.00%, avg=41068.00, stdev=880.06, samples=4 00:21:20.161 iops : min= 9948, max=10444, avg=10267.00, stdev=220.02, samples=4 00:21:20.161 write: IOPS=10.3k, BW=40.1MiB/s (42.1MB/s)(80.5MiB/2006msec); 0 zone resets 00:21:20.161 slat (nsec): min=1788, max=363025, avg=2375.76, stdev=2923.58 00:21:20.161 clat (usec): min=2616, max=11055, avg=5801.81, stdev=491.86 00:21:20.161 lat (usec): min=2630, max=11057, avg=5804.19, stdev=491.87 00:21:20.161 clat percentiles (usec): 00:21:20.161 | 1.00th=[ 4817], 5.00th=[ 5080], 10.00th=[ 5276], 20.00th=[ 5407], 00:21:20.161 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5866], 00:21:20.161 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6521], 00:21:20.161 | 99.00th=[ 7308], 99.50th=[ 7832], 99.90th=[ 8979], 99.95th=[10421], 00:21:20.161 | 99.99th=[10945] 00:21:20.161 bw ( KiB/s): min=40400, max=42088, per=99.98%, avg=41088.00, stdev=787.72, samples=4 00:21:20.161 iops : min=10100, max=10522, avg=10272.00, stdev=196.93, samples=4 00:21:20.161 lat (msec) : 4=0.06%, 10=99.84%, 20=0.10% 00:21:20.161 cpu : usr=65.94%, sys=24.99%, ctx=545, majf=0, minf=5 00:21:20.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:20.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:20.161 issued rwts: total=20595,20610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:20.161 00:21:20.161 Run status group 0 (all jobs): 00:21:20.161 READ: bw=40.1MiB/s (42.1MB/s), 40.1MiB/s-40.1MiB/s (42.1MB/s-42.1MB/s), io=80.4MiB (84.4MB), 
run=2006-2006msec 00:21:20.161 WRITE: bw=40.1MiB/s (42.1MB/s), 40.1MiB/s-40.1MiB/s (42.1MB/s-42.1MB/s), io=80.5MiB (84.4MB), run=2006-2006msec 00:21:20.161 22:21:16 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:20.161 22:21:16 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:20.161 22:21:16 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:20.161 22:21:16 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:20.161 22:21:16 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:20.161 22:21:16 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:20.161 22:21:16 -- common/autotest_common.sh@1330 -- # shift 00:21:20.161 22:21:16 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:20.161 22:21:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.161 22:21:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:20.161 22:21:16 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:20.161 22:21:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:20.161 22:21:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:20.161 22:21:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:20.161 22:21:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.161 22:21:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:20.161 22:21:16 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:20.161 22:21:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:20.161 22:21:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:20.161 22:21:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:20.161 22:21:16 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:20.161 22:21:16 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:20.161 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:20.161 fio-3.35 00:21:20.161 Starting 1 thread 00:21:22.700 00:21:22.700 test: (groupid=0, jobs=1): err= 0: pid=84251: Sun Nov 17 22:21:18 2024 00:21:22.700 read: IOPS=8856, BW=138MiB/s (145MB/s)(278MiB/2006msec) 00:21:22.700 slat (usec): min=2, max=116, avg= 3.53, stdev= 2.50 00:21:22.700 clat (usec): min=2055, max=16089, avg=8586.57, stdev=2161.54 00:21:22.700 lat (usec): min=2058, max=16094, avg=8590.10, stdev=2161.86 00:21:22.700 clat percentiles (usec): 00:21:22.700 | 1.00th=[ 4490], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6652], 00:21:22.700 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 9110], 00:21:22.700 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11207], 95.00th=[12518], 00:21:22.700 | 99.00th=[14484], 99.50th=[14877], 99.90th=[15664], 99.95th=[15795], 00:21:22.700 | 99.99th=[15926] 00:21:22.700 bw ( KiB/s): min=67776, max=76192, per=50.24%, avg=71192.00, stdev=3825.16, samples=4 00:21:22.700 iops : 
min= 4236, max= 4762, avg=4449.50, stdev=239.07, samples=4 00:21:22.700 write: IOPS=5214, BW=81.5MiB/s (85.4MB/s)(145MiB/1779msec); 0 zone resets 00:21:22.700 slat (usec): min=29, max=350, avg=34.18, stdev= 9.52 00:21:22.700 clat (usec): min=3263, max=18411, avg=10362.04, stdev=1895.44 00:21:22.700 lat (usec): min=3296, max=18457, avg=10396.23, stdev=1898.00 00:21:22.700 clat percentiles (usec): 00:21:22.700 | 1.00th=[ 6915], 5.00th=[ 7767], 10.00th=[ 8291], 20.00th=[ 8848], 00:21:22.700 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10421], 00:21:22.700 | 70.00th=[10945], 80.00th=[11731], 90.00th=[13042], 95.00th=[14091], 00:21:22.700 | 99.00th=[15664], 99.50th=[16188], 99.90th=[17957], 99.95th=[18220], 00:21:22.700 | 99.99th=[18482] 00:21:22.700 bw ( KiB/s): min=70944, max=79424, per=88.95%, avg=74208.00, stdev=4000.98, samples=4 00:21:22.700 iops : min= 4434, max= 4964, avg=4638.00, stdev=250.06, samples=4 00:21:22.700 lat (msec) : 4=0.37%, 10=64.92%, 20=34.71% 00:21:22.700 cpu : usr=69.98%, sys=18.65%, ctx=4, majf=0, minf=1 00:21:22.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:22.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.700 issued rwts: total=17766,9276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.700 00:21:22.700 Run status group 0 (all jobs): 00:21:22.700 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=278MiB (291MB), run=2006-2006msec 00:21:22.700 WRITE: bw=81.5MiB/s (85.4MB/s), 81.5MiB/s-81.5MiB/s (85.4MB/s-85.4MB/s), io=145MiB (152MB), run=1779-1779msec 00:21:22.700 22:21:18 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.700 22:21:19 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:22.700 22:21:19 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:22.700 22:21:19 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:22.700 22:21:19 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:22.700 22:21:19 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:22.700 22:21:19 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:22.700 22:21:19 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:22.700 22:21:19 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:22.700 22:21:19 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:22.701 22:21:19 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:22.701 22:21:19 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:22.960 Nvme0n1 00:21:22.960 22:21:19 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:23.219 22:21:19 -- host/fio.sh@53 -- # ls_guid=e59dd718-6fee-4853-bcd1-69a6d388aabe 00:21:23.219 22:21:19 -- host/fio.sh@54 -- # get_lvs_free_mb e59dd718-6fee-4853-bcd1-69a6d388aabe 00:21:23.219 22:21:19 -- common/autotest_common.sh@1353 -- # local lvs_uuid=e59dd718-6fee-4853-bcd1-69a6d388aabe 00:21:23.219 22:21:19 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:23.219 22:21:19 -- common/autotest_common.sh@1355 -- # local fc 00:21:23.219 22:21:19 -- 
common/autotest_common.sh@1356 -- # local cs 00:21:23.478 22:21:19 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:23.736 22:21:20 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:23.736 { 00:21:23.736 "base_bdev": "Nvme0n1", 00:21:23.736 "block_size": 4096, 00:21:23.736 "cluster_size": 1073741824, 00:21:23.736 "free_clusters": 4, 00:21:23.737 "name": "lvs_0", 00:21:23.737 "total_data_clusters": 4, 00:21:23.737 "uuid": "e59dd718-6fee-4853-bcd1-69a6d388aabe" 00:21:23.737 } 00:21:23.737 ]' 00:21:23.737 22:21:20 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="e59dd718-6fee-4853-bcd1-69a6d388aabe") .free_clusters' 00:21:23.737 22:21:20 -- common/autotest_common.sh@1358 -- # fc=4 00:21:23.737 22:21:20 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="e59dd718-6fee-4853-bcd1-69a6d388aabe") .cluster_size' 00:21:23.737 22:21:20 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:23.737 22:21:20 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:23.737 22:21:20 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:23.737 4096 00:21:23.737 22:21:20 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:23.996 8b7edbd8-6056-4fce-bc20-cd5c2dc18aa3 00:21:23.996 22:21:20 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:24.255 22:21:20 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:24.514 22:21:20 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:24.514 22:21:21 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:24.514 22:21:21 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:24.514 22:21:21 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:24.514 22:21:21 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:24.514 22:21:21 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:24.514 22:21:21 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.514 22:21:21 -- common/autotest_common.sh@1330 -- # shift 00:21:24.514 22:21:21 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:24.514 22:21:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:24.514 22:21:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.514 22:21:21 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:24.514 22:21:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:24.514 22:21:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:24.514 22:21:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:24.514 22:21:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:24.773 22:21:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.773 22:21:21 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:24.773 22:21:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:24.773 22:21:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:24.773 22:21:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:24.773 22:21:21 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:24.773 22:21:21 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:24.773 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:24.773 fio-3.35 00:21:24.773 Starting 1 thread 00:21:27.309 00:21:27.309 test: (groupid=0, jobs=1): err= 0: pid=84404: Sun Nov 17 22:21:23 2024 00:21:27.309 read: IOPS=6565, BW=25.6MiB/s (26.9MB/s)(51.5MiB/2008msec) 00:21:27.309 slat (nsec): min=1752, max=353559, avg=3092.75, stdev=5404.86 00:21:27.309 clat (usec): min=3927, max=17433, avg=10430.63, stdev=1043.53 00:21:27.309 lat (usec): min=3937, max=17436, avg=10433.72, stdev=1043.32 00:21:27.309 clat percentiles (usec): 00:21:27.309 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9634], 00:21:27.309 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:21:27.309 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[12125], 00:21:27.309 | 99.00th=[12911], 99.50th=[13304], 99.90th=[16319], 99.95th=[16712], 00:21:27.309 | 99.99th=[17433] 00:21:27.309 bw ( KiB/s): min=25568, max=26792, per=99.87%, avg=26230.00, stdev=504.54, samples=4 00:21:27.309 iops : min= 6392, max= 6698, avg=6557.50, stdev=126.14, samples=4 00:21:27.309 write: IOPS=6574, BW=25.7MiB/s (26.9MB/s)(51.6MiB/2008msec); 0 zone resets 00:21:27.309 slat (nsec): min=1862, max=249994, avg=3244.08, stdev=4229.53 00:21:27.309 clat (usec): min=2661, max=17516, avg=8973.82, stdev=878.33 00:21:27.309 lat (usec): min=2674, max=17518, avg=8977.07, stdev=878.20 00:21:27.309 clat percentiles (usec): 00:21:27.309 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8291], 00:21:27.309 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:21:27.309 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10290], 00:21:27.309 | 99.00th=[10945], 99.50th=[11207], 99.90th=[14222], 99.95th=[15533], 00:21:27.309 | 99.99th=[16057] 00:21:27.309 bw ( KiB/s): min=25728, max=26592, per=99.96%, avg=26288.00, stdev=402.07, samples=4 00:21:27.309 iops : min= 6432, max= 6648, avg=6572.00, stdev=100.52, samples=4 00:21:27.309 lat (msec) : 4=0.04%, 10=61.84%, 20=38.13% 00:21:27.309 cpu : usr=67.96%, sys=22.57%, ctx=23, majf=0, minf=5 00:21:27.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:27.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:27.310 issued rwts: total=13184,13202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:27.310 00:21:27.310 Run status group 0 (all jobs): 00:21:27.310 READ: bw=25.6MiB/s (26.9MB/s), 25.6MiB/s-25.6MiB/s (26.9MB/s-26.9MB/s), io=51.5MiB (54.0MB), run=2008-2008msec 00:21:27.310 WRITE: bw=25.7MiB/s (26.9MB/s), 25.7MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=51.6MiB (54.1MB), run=2008-2008msec 00:21:27.310 22:21:23 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:27.310 22:21:23 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:27.568 22:21:24 -- host/fio.sh@64 -- # ls_nested_guid=e03e24c4-0bdd-4b23-8c73-4bf2cea8b762 00:21:27.568 22:21:24 -- host/fio.sh@65 -- # get_lvs_free_mb e03e24c4-0bdd-4b23-8c73-4bf2cea8b762 00:21:27.568 22:21:24 -- common/autotest_common.sh@1353 -- # local lvs_uuid=e03e24c4-0bdd-4b23-8c73-4bf2cea8b762 00:21:27.568 22:21:24 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:27.568 22:21:24 -- common/autotest_common.sh@1355 -- # local fc 00:21:27.568 22:21:24 -- common/autotest_common.sh@1356 -- # local cs 00:21:27.568 22:21:24 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:27.827 22:21:24 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:27.827 { 00:21:27.827 "base_bdev": "Nvme0n1", 00:21:27.827 "block_size": 4096, 00:21:27.827 "cluster_size": 1073741824, 00:21:27.827 "free_clusters": 0, 00:21:27.827 "name": "lvs_0", 00:21:27.827 "total_data_clusters": 4, 00:21:27.827 "uuid": "e59dd718-6fee-4853-bcd1-69a6d388aabe" 00:21:27.827 }, 00:21:27.827 { 00:21:27.827 "base_bdev": "8b7edbd8-6056-4fce-bc20-cd5c2dc18aa3", 00:21:27.827 "block_size": 4096, 00:21:27.827 "cluster_size": 4194304, 00:21:27.827 "free_clusters": 1022, 00:21:27.827 "name": "lvs_n_0", 00:21:27.827 "total_data_clusters": 1022, 00:21:27.827 "uuid": "e03e24c4-0bdd-4b23-8c73-4bf2cea8b762" 00:21:27.827 } 00:21:27.827 ]' 00:21:27.827 22:21:24 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="e03e24c4-0bdd-4b23-8c73-4bf2cea8b762") .free_clusters' 00:21:27.827 22:21:24 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:27.827 22:21:24 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="e03e24c4-0bdd-4b23-8c73-4bf2cea8b762") .cluster_size' 00:21:27.827 22:21:24 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:27.827 22:21:24 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:27.827 4088 00:21:27.827 22:21:24 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:27.827 22:21:24 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:28.086 e1e3116e-e361-44e1-9104-1a5c7a8c46fe 00:21:28.086 22:21:24 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:28.345 22:21:24 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:28.604 22:21:25 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:28.865 22:21:25 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:28.865 22:21:25 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:28.865 22:21:25 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:28.865 22:21:25 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:28.865 
22:21:25 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:28.865 22:21:25 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:28.865 22:21:25 -- common/autotest_common.sh@1330 -- # shift 00:21:28.865 22:21:25 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:28.865 22:21:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:28.865 22:21:25 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:28.865 22:21:25 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:28.865 22:21:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:28.865 22:21:25 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:28.865 22:21:25 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:28.865 22:21:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:28.865 22:21:25 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:28.865 22:21:25 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:28.865 22:21:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:28.865 22:21:25 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:28.865 22:21:25 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:28.865 22:21:25 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:28.865 22:21:25 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:28.865 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:28.865 fio-3.35 00:21:28.865 Starting 1 thread 00:21:31.456 00:21:31.456 test: (groupid=0, jobs=1): err= 0: pid=84520: Sun Nov 17 22:21:27 2024 00:21:31.456 read: IOPS=5860, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2009msec) 00:21:31.456 slat (nsec): min=1838, max=348822, avg=3207.49, stdev=5361.57 00:21:31.456 clat (usec): min=4564, max=19452, avg=11716.99, stdev=1212.40 00:21:31.456 lat (usec): min=4572, max=19455, avg=11720.20, stdev=1212.25 00:21:31.456 clat percentiles (usec): 00:21:31.456 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10683], 00:21:31.456 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994], 00:21:31.456 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13304], 95.00th=[13698], 00:21:31.456 | 99.00th=[14615], 99.50th=[15008], 99.90th=[17695], 99.95th=[18744], 00:21:31.456 | 99.99th=[19268] 00:21:31.456 bw ( KiB/s): min=22672, max=24000, per=99.93%, avg=23424.00, stdev=587.59, samples=4 00:21:31.456 iops : min= 5668, max= 6000, avg=5856.00, stdev=146.90, samples=4 00:21:31.456 write: IOPS=5851, BW=22.9MiB/s (24.0MB/s)(45.9MiB/2009msec); 0 zone resets 00:21:31.456 slat (nsec): min=1940, max=332898, avg=3287.59, stdev=4550.91 00:21:31.456 clat (usec): min=2478, max=19157, avg=10057.08, stdev=1023.56 00:21:31.456 lat (usec): min=2490, max=19159, avg=10060.37, stdev=1023.51 00:21:31.456 clat percentiles (usec): 00:21:31.456 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9241], 00:21:31.456 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:21:31.456 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:21:31.456 | 99.00th=[12387], 99.50th=[12780], 99.90th=[17433], 99.95th=[17957], 00:21:31.456 | 99.99th=[19006] 
00:21:31.456 bw ( KiB/s): min=23040, max=23560, per=99.89%, avg=23378.00, stdev=232.08, samples=4 00:21:31.456 iops : min= 5760, max= 5890, avg=5844.50, stdev=58.02, samples=4 00:21:31.456 lat (msec) : 4=0.04%, 10=27.17%, 20=72.79% 00:21:31.456 cpu : usr=65.89%, sys=25.35%, ctx=5, majf=0, minf=5 00:21:31.456 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:31.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:31.456 issued rwts: total=11773,11755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.456 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:31.456 00:21:31.456 Run status group 0 (all jobs): 00:21:31.456 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.2MB), run=2009-2009msec 00:21:31.456 WRITE: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=45.9MiB (48.1MB), run=2009-2009msec 00:21:31.456 22:21:27 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:31.456 22:21:27 -- host/fio.sh@74 -- # sync 00:21:31.456 22:21:28 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:31.715 22:21:28 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:31.973 22:21:28 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:32.232 22:21:28 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:32.491 22:21:28 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:32.754 22:21:29 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:32.754 22:21:29 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:32.754 22:21:29 -- host/fio.sh@86 -- # nvmftestfini 00:21:32.754 22:21:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:32.754 22:21:29 -- nvmf/common.sh@116 -- # sync 00:21:32.754 22:21:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:32.754 22:21:29 -- nvmf/common.sh@119 -- # set +e 00:21:32.754 22:21:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:32.754 22:21:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:32.755 rmmod nvme_tcp 00:21:32.755 rmmod nvme_fabrics 00:21:32.755 rmmod nvme_keyring 00:21:32.755 22:21:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:32.755 22:21:29 -- nvmf/common.sh@123 -- # set -e 00:21:32.755 22:21:29 -- nvmf/common.sh@124 -- # return 0 00:21:32.755 22:21:29 -- nvmf/common.sh@477 -- # '[' -n 84071 ']' 00:21:32.755 22:21:29 -- nvmf/common.sh@478 -- # killprocess 84071 00:21:32.755 22:21:29 -- common/autotest_common.sh@936 -- # '[' -z 84071 ']' 00:21:32.755 22:21:29 -- common/autotest_common.sh@940 -- # kill -0 84071 00:21:32.755 22:21:29 -- common/autotest_common.sh@941 -- # uname 00:21:32.755 22:21:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:32.755 22:21:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84071 00:21:32.755 22:21:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:32.755 killing process with pid 84071 00:21:32.755 22:21:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:32.755 22:21:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84071' 00:21:32.755 22:21:29 -- 
common/autotest_common.sh@955 -- # kill 84071 00:21:32.755 22:21:29 -- common/autotest_common.sh@960 -- # wait 84071 00:21:33.016 22:21:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:33.016 22:21:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:33.016 22:21:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:33.016 22:21:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.016 22:21:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:33.016 22:21:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.016 22:21:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.016 22:21:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.016 22:21:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:33.016 00:21:33.016 real 0m18.805s 00:21:33.016 user 1m22.318s 00:21:33.016 sys 0m4.409s 00:21:33.016 ************************************ 00:21:33.016 END TEST nvmf_fio_host 00:21:33.016 ************************************ 00:21:33.016 22:21:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:33.016 22:21:29 -- common/autotest_common.sh@10 -- # set +x 00:21:33.016 22:21:29 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:33.016 22:21:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:33.016 22:21:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:33.016 22:21:29 -- common/autotest_common.sh@10 -- # set +x 00:21:33.016 ************************************ 00:21:33.016 START TEST nvmf_failover 00:21:33.016 ************************************ 00:21:33.016 22:21:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:33.276 * Looking for test storage... 00:21:33.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:33.276 22:21:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:33.276 22:21:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:33.276 22:21:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:33.276 22:21:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:33.276 22:21:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:33.276 22:21:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:33.276 22:21:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:33.276 22:21:29 -- scripts/common.sh@335 -- # IFS=.-: 00:21:33.276 22:21:29 -- scripts/common.sh@335 -- # read -ra ver1 00:21:33.276 22:21:29 -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.276 22:21:29 -- scripts/common.sh@336 -- # read -ra ver2 00:21:33.276 22:21:29 -- scripts/common.sh@337 -- # local 'op=<' 00:21:33.276 22:21:29 -- scripts/common.sh@339 -- # ver1_l=2 00:21:33.276 22:21:29 -- scripts/common.sh@340 -- # ver2_l=1 00:21:33.276 22:21:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:33.276 22:21:29 -- scripts/common.sh@343 -- # case "$op" in 00:21:33.276 22:21:29 -- scripts/common.sh@344 -- # : 1 00:21:33.276 22:21:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:33.276 22:21:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.276 22:21:29 -- scripts/common.sh@364 -- # decimal 1 00:21:33.276 22:21:29 -- scripts/common.sh@352 -- # local d=1 00:21:33.276 22:21:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.276 22:21:29 -- scripts/common.sh@354 -- # echo 1 00:21:33.276 22:21:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:33.276 22:21:29 -- scripts/common.sh@365 -- # decimal 2 00:21:33.276 22:21:29 -- scripts/common.sh@352 -- # local d=2 00:21:33.276 22:21:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.276 22:21:29 -- scripts/common.sh@354 -- # echo 2 00:21:33.276 22:21:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:33.276 22:21:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:33.276 22:21:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:33.276 22:21:29 -- scripts/common.sh@367 -- # return 0 00:21:33.276 22:21:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.276 22:21:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:33.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.276 --rc genhtml_branch_coverage=1 00:21:33.276 --rc genhtml_function_coverage=1 00:21:33.276 --rc genhtml_legend=1 00:21:33.276 --rc geninfo_all_blocks=1 00:21:33.276 --rc geninfo_unexecuted_blocks=1 00:21:33.276 00:21:33.276 ' 00:21:33.276 22:21:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:33.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.276 --rc genhtml_branch_coverage=1 00:21:33.276 --rc genhtml_function_coverage=1 00:21:33.276 --rc genhtml_legend=1 00:21:33.276 --rc geninfo_all_blocks=1 00:21:33.276 --rc geninfo_unexecuted_blocks=1 00:21:33.276 00:21:33.276 ' 00:21:33.276 22:21:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:33.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.276 --rc genhtml_branch_coverage=1 00:21:33.276 --rc genhtml_function_coverage=1 00:21:33.276 --rc genhtml_legend=1 00:21:33.276 --rc geninfo_all_blocks=1 00:21:33.276 --rc geninfo_unexecuted_blocks=1 00:21:33.276 00:21:33.276 ' 00:21:33.276 22:21:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:33.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.276 --rc genhtml_branch_coverage=1 00:21:33.276 --rc genhtml_function_coverage=1 00:21:33.276 --rc genhtml_legend=1 00:21:33.276 --rc geninfo_all_blocks=1 00:21:33.276 --rc geninfo_unexecuted_blocks=1 00:21:33.276 00:21:33.276 ' 00:21:33.276 22:21:29 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:33.276 22:21:29 -- nvmf/common.sh@7 -- # uname -s 00:21:33.276 22:21:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.276 22:21:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.276 22:21:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.276 22:21:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.276 22:21:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.276 22:21:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.276 22:21:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.276 22:21:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.276 22:21:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.276 22:21:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.276 22:21:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:21:33.276 
22:21:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:21:33.276 22:21:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.276 22:21:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.276 22:21:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:33.276 22:21:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:33.276 22:21:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.277 22:21:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.277 22:21:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.277 22:21:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.277 22:21:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.277 22:21:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.277 22:21:29 -- paths/export.sh@5 -- # export PATH 00:21:33.277 22:21:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.277 22:21:29 -- nvmf/common.sh@46 -- # : 0 00:21:33.277 22:21:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:33.277 22:21:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:33.277 22:21:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:33.277 22:21:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.277 22:21:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.277 22:21:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:21:33.277 22:21:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:33.277 22:21:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:33.277 22:21:29 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:33.277 22:21:29 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:33.277 22:21:29 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:33.277 22:21:29 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:33.277 22:21:29 -- host/failover.sh@18 -- # nvmftestinit 00:21:33.277 22:21:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:33.277 22:21:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.277 22:21:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:33.277 22:21:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:33.277 22:21:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:33.277 22:21:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.277 22:21:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.277 22:21:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.277 22:21:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:33.277 22:21:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:33.277 22:21:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:33.277 22:21:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:33.277 22:21:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:33.277 22:21:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:33.277 22:21:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.277 22:21:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.277 22:21:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:33.277 22:21:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:33.277 22:21:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:33.277 22:21:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:33.277 22:21:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:33.277 22:21:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.277 22:21:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:33.277 22:21:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:33.277 22:21:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:33.277 22:21:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:33.277 22:21:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:33.277 22:21:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:33.277 Cannot find device "nvmf_tgt_br" 00:21:33.277 22:21:29 -- nvmf/common.sh@154 -- # true 00:21:33.277 22:21:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:33.277 Cannot find device "nvmf_tgt_br2" 00:21:33.277 22:21:29 -- nvmf/common.sh@155 -- # true 00:21:33.277 22:21:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:33.277 22:21:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:33.277 Cannot find device "nvmf_tgt_br" 00:21:33.277 22:21:29 -- nvmf/common.sh@157 -- # true 00:21:33.277 22:21:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:33.536 Cannot find device "nvmf_tgt_br2" 00:21:33.536 22:21:29 -- nvmf/common.sh@158 -- # true 00:21:33.536 22:21:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:33.536 22:21:29 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:33.536 22:21:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:33.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:33.536 22:21:29 -- nvmf/common.sh@161 -- # true 00:21:33.536 22:21:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:33.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:33.536 22:21:29 -- nvmf/common.sh@162 -- # true 00:21:33.536 22:21:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:33.536 22:21:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:33.536 22:21:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:33.536 22:21:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:33.536 22:21:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:33.536 22:21:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:33.536 22:21:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:33.536 22:21:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:33.536 22:21:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:33.536 22:21:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:33.536 22:21:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:33.536 22:21:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:33.536 22:21:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:33.536 22:21:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:33.536 22:21:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:33.536 22:21:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:33.537 22:21:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:33.537 22:21:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:33.537 22:21:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:33.537 22:21:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:33.537 22:21:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:33.537 22:21:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:33.537 22:21:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:33.537 22:21:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:33.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:21:33.537 00:21:33.537 --- 10.0.0.2 ping statistics --- 00:21:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.537 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:33.537 22:21:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:33.537 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:33.537 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:21:33.537 00:21:33.537 --- 10.0.0.3 ping statistics --- 00:21:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.537 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:33.537 22:21:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:33.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:21:33.537 00:21:33.537 --- 10.0.0.1 ping statistics --- 00:21:33.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.537 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:21:33.537 22:21:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.537 22:21:30 -- nvmf/common.sh@421 -- # return 0 00:21:33.537 22:21:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:33.537 22:21:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.537 22:21:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:33.537 22:21:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:33.537 22:21:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.537 22:21:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:33.537 22:21:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:33.537 22:21:30 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:33.537 22:21:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:33.537 22:21:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:33.537 22:21:30 -- common/autotest_common.sh@10 -- # set +x 00:21:33.796 22:21:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:33.796 22:21:30 -- nvmf/common.sh@469 -- # nvmfpid=84800 00:21:33.796 22:21:30 -- nvmf/common.sh@470 -- # waitforlisten 84800 00:21:33.796 22:21:30 -- common/autotest_common.sh@829 -- # '[' -z 84800 ']' 00:21:33.796 22:21:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.796 22:21:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.796 22:21:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.796 22:21:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.796 22:21:30 -- common/autotest_common.sh@10 -- # set +x 00:21:33.796 [2024-11-17 22:21:30.205036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:33.796 [2024-11-17 22:21:30.205113] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.796 [2024-11-17 22:21:30.340013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:34.056 [2024-11-17 22:21:30.439779] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:34.056 [2024-11-17 22:21:30.440154] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.056 [2024-11-17 22:21:30.440203] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:34.056 [2024-11-17 22:21:30.440366] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.056 [2024-11-17 22:21:30.440548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.056 [2024-11-17 22:21:30.440686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.056 [2024-11-17 22:21:30.440693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.624 22:21:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.624 22:21:31 -- common/autotest_common.sh@862 -- # return 0 00:21:34.624 22:21:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:34.624 22:21:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.624 22:21:31 -- common/autotest_common.sh@10 -- # set +x 00:21:34.883 22:21:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.884 22:21:31 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:35.143 [2024-11-17 22:21:31.522523] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.143 22:21:31 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:35.402 Malloc0 00:21:35.402 22:21:31 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:35.662 22:21:32 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:35.921 22:21:32 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:36.180 [2024-11-17 22:21:32.539926] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.180 22:21:32 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:36.440 [2024-11-17 22:21:32.804124] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:36.440 22:21:32 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:36.699 [2024-11-17 22:21:33.080521] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:36.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:36.699 22:21:33 -- host/failover.sh@31 -- # bdevperf_pid=84913
00:21:36.699 22:21:33 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:21:36.699 22:21:33 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:36.699 22:21:33 -- host/failover.sh@34 -- # waitforlisten 84913 /var/tmp/bdevperf.sock
00:21:36.699 22:21:33 -- common/autotest_common.sh@829 -- # '[' -z 84913 ']'
00:21:36.699 22:21:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:36.699 22:21:33 -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:36.699 22:21:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:36.699 22:21:33 -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:36.699 22:21:33 -- common/autotest_common.sh@10 -- # set +x
00:21:37.636 22:21:34 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:37.636 22:21:34 -- common/autotest_common.sh@862 -- # return 0
00:21:37.636 22:21:34 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:37.896 NVMe0n1
00:21:37.896 22:21:34 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:38.155
00:21:38.155 22:21:34 -- host/failover.sh@39 -- # run_test_pid=84956
00:21:38.155 22:21:34 -- host/failover.sh@41 -- # sleep 1
00:21:38.155 22:21:34 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:39.093 22:21:35 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:39.354 [2024-11-17 22:21:35.910059] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d65b0 is same with the state(5) to be set
00:21:39.354 [... the same tcp.c:1576 message for tqpair=0x13d65b0 repeats verbatim from 22:21:35.910059 through 22:21:35.910634; the duplicate entries are elided here ...]
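The trace above shows host/failover.sh standing up the initiator side and starting I/O: bdevperf is launched as a long-running RPC server, NVMe0 is attached to the target subsystem over two TCP portals (4420 and 4421), a perform_tests run is backgrounded, and then the first listener is removed to begin the failover exercise. A minimal shell sketch of that setup, using the same paths, ports, NQN and flags that appear in the log (the wrapper shell around them is illustrative, and waitforlisten is an SPDK test-harness helper assumed to be sourced):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Start bdevperf idle (-z: wait for the perform_tests RPC) on its own RPC socket,
    # queue depth 128, 4096-byte I/O, "verify" workload, 15-second run.
    "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 15 -f &
    bdevperf_pid=$!
    # waitforlisten "$bdevperf_pid" "$SOCK"   # harness helper: block until the socket answers

    # Attach the same subsystem over two TCP paths so the host has somewhere to fail over to.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"

    # Kick off the timed verify run in the background and remember its pid.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
    run_test_pid=$!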
00:21:39.355 22:21:35 -- host/failover.sh@45 -- # sleep 3
00:21:42.649 22:21:38 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:42.649 00
00:21:42.909 22:21:39 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:42.909 [2024-11-17 22:21:39.475663] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d7420 is same with the state(5) to be set
00:21:42.909 [... the same tcp.c:1576 message for tqpair=0x13d7420 repeats verbatim from 22:21:39.475663 through 22:21:39.476192; the duplicate entries are elided here ...]
00:21:42.909 22:21:39 -- host/failover.sh@50 -- # sleep 3
00:21:46.197 22:21:42 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
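The repeated tcp.c:1576 messages above appear to be the target-side qpairs being torn down as their listener goes away; they bracket the actual failover exercise, which is just a sequence of listener removals and additions against the target's default RPC socket (no -s argument, i.e. /var/tmp/spdk.sock) interleaved with sleeps. Continuing the sketch above with the same variables, that rotation looks roughly like:

    # Target-side RPC (default socket, hence no -s): drop the portal the host connected
    # to first, pushing I/O over to the 4421 path.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 3

    # Give the host a third path at 4422, then remove the one it is currently using.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"
    "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
    sleep 3

    # Bring the original portal back so the host can fail back to 4420.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420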
00:21:46.197 [2024-11-17 22:21:42.751189] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:46.197 22:21:42 -- host/failover.sh@55 -- # sleep 1
00:21:47.577 22:21:43 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:47.577 [2024-11-17 22:21:43.969877] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d7fb0 is same with the state(5) to be set
00:21:47.577 [... the same tcp.c:1576 message for tqpair=0x13d7fb0 repeats verbatim from 22:21:43.969877 through 22:21:43.970159; the duplicate entries are elided here ...]
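With the last alternate portal (4422) removed just above, the trace below winds the test down: the script waits for the backgrounded perform_tests helper (pid 84956), kills bdevperf, and prints its log file (try.txt). A sketch of that tail, again reusing the variables from the earlier sketches (process_shm and nvmftestfini from the real trap handler are harness helpers and are omitted here):

    # Wait for the timed verify run started earlier; the bare "0" printed in the trace
    # below appears to be its reported result, i.e. the I/O survived the portal flips.
    wait "$run_test_pid"

    # Tear down bdevperf and show what it logged, mirroring host/failover.sh@61-63.
    kill "$bdevperf_pid"
    wait "$bdevperf_pid" 2>/dev/null || true
    cat "$SPDK/test/nvmf/host/try.txt"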
00:21:47.578 22:21:43 -- host/failover.sh@59 -- # wait 84956
00:21:54.148 0
00:21:54.148 22:21:49 -- host/failover.sh@61 -- # killprocess 84913
00:21:54.148 22:21:49 -- common/autotest_common.sh@936 -- # '[' -z 84913 ']'
00:21:54.148 22:21:49 -- common/autotest_common.sh@940 -- # kill -0 84913
00:21:54.148 22:21:49 -- common/autotest_common.sh@941 -- # uname
00:21:54.148 22:21:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:54.148 22:21:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84913
00:21:54.148 killing process with pid 84913
00:21:54.148 22:21:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:54.148 22:21:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:54.148 22:21:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84913'
00:21:54.148 22:21:49 -- common/autotest_common.sh@955 -- # kill 84913
00:21:54.148 22:21:49 -- common/autotest_common.sh@960 -- # wait 84913
00:21:54.148 22:21:50 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:54.148 [2024-11-17 22:21:33.135401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:54.148 [2024-11-17 22:21:33.135498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84913 ]
00:21:54.148 [2024-11-17 22:21:33.270576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:54.148 [2024-11-17 22:21:33.374860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:54.148 Running I/O for 15 seconds...
00:21:54.148 [2024-11-17 22:21:35.912726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.913052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.913205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.913286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.913358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.913439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.913502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.913575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.913645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.913720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.913850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.913939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.914042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.914129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.914206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.914329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.914397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.914471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.914538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.914619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 
22:21:35.914702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.914795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.914925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.915003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.915077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.915183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.915252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.915346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.915415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.915508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.915578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.915650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.915721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.915850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.915935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.916022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.916100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.916204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.916274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.916350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.916421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.916493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.916562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.916639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.916708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.916818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.916895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.916993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.917069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.917160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.917230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.917303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.917374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.917447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.917512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.917592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.917654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.917729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.917841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.917916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.918001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:73 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.918087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.918159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.918233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.918311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.918389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.918463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.918538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.918608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.918683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.918782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.918872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.918952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.919027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.919098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.149 [2024-11-17 22:21:35.919185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.149 [2024-11-17 22:21:35.919254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.919328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.919404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.919494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.919556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14448 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.919629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.919697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.919808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.919886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.919968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.920040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.920151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.920220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.920293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.920354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.920424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.920484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.920575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.920645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.920722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.920824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.920902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.920987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.921078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.921162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:54.150 [2024-11-17 22:21:35.921236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.921296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.921373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.921442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.921514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.921582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.921675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.921761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.921856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.921924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.922022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.922105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.922177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.922251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.922359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.922431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.922518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.922593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.922669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.922741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.922859] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.922939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.923047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.923149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.923221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.923301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.923407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.923480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.923558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.923623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.923708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.923785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.923896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.923975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.924085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.924154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.924226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.924293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.924386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.924448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.924519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.924589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.924661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.924770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.924873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.924951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.925032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.925161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.925252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.925321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.925412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.925487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.150 [2024-11-17 22:21:35.925593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.150 [2024-11-17 22:21:35.925666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.150 [2024-11-17 22:21:35.925750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.925847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.925939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.926032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.151 [2024-11-17 22:21:35.926112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.926185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.151 [2024-11-17 22:21:35.926299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.926371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.926443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.926512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.926585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.926655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.151 [2024-11-17 22:21:35.926727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.926831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.151 [2024-11-17 22:21:35.926924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.927000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.927081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.927199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.927283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.927353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.927422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.927489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.927565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.927634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.927705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.927810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.927885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.927951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.151 [2024-11-17 22:21:35.928029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.928125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.151 [2024-11-17 22:21:35.928236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.928300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.151 [2024-11-17 22:21:35.928372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.928443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.928515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.928592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.928660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.928732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.928847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.928926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.929021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.929118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.929195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.929269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.929377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.929442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.929546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 
[2024-11-17 22:21:35.929603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.929667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.929733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.929832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.929926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.930032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.930109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.930184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.930287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.930375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.930437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.930506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.930564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.930638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.930721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.930831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.930907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.930981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.931057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.931180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.931265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.151 [2024-11-17 22:21:35.931350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.931423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.151 [2024-11-17 22:21:35.931490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.931558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.151 [2024-11-17 22:21:35.931657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.931723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.151 [2024-11-17 22:21:35.931819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.931907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.151 [2024-11-17 22:21:35.931986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.932059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.151 [2024-11-17 22:21:35.932176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.151 [2024-11-17 22:21:35.932242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:35.932326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:35.932395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:35.932471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:35.932532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.152 [2024-11-17 22:21:35.932656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:35.932756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:35.932883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:35.932966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:35.933098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:35.933175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:35.933242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:35.933314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:35.933382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:35.933456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:35.933489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:35.933507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:35.933520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:35.933534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:35.933546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:35.933559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f49a0 is same with the state(5) to be set 00:21:54.152 [2024-11-17 22:21:35.933575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:54.152 [2024-11-17 22:21:35.933585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:54.152 [2024-11-17 22:21:35.933595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15016 len:8 PRP1 0x0 PRP2 0x0 00:21:54.152 [2024-11-17 22:21:35.933607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:35.933662] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8f49a0 was disconnected and freed. reset controller. 
00:21:54.152 [2024-11-17 22:21:35.933678] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:54.152 [2024-11-17 22:21:35.933734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:54.152 [2024-11-17 22:21:35.933788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.152 [2024-11-17 22:21:35.933803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:54.152 [2024-11-17 22:21:35.933817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.152 [2024-11-17 22:21:35.933832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:54.152 [2024-11-17 22:21:35.933862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.152 [2024-11-17 22:21:35.933878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:54.152 [2024-11-17 22:21:35.933892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.152 [2024-11-17 22:21:35.933905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:54.152 [2024-11-17 22:21:35.933963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87f440 (9): Bad file descriptor
00:21:54.152 [2024-11-17 22:21:35.936872] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:54.152 [2024-11-17 22:21:35.969051] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
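The records above form one complete failover cycle as logged: in-flight I/O is completed as "ABORTED - SQ DELETION" when the submission queue is torn down, the qpair is disconnected and freed, bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes. A minimal, illustrative sketch for summarizing these events from a saved copy of this console output follows; the script and log file names are hypothetical, and it only matches the literal strings that appear in the records above.

#!/usr/bin/env python3
# Illustrative sketch (not part of the test suite): summarize failover events
# from a saved console log by matching the NOTICE strings shown above.
import re
import sys

FAILOVER = re.compile(r"Start failover from (\S+) to (\S+)")

def summarize(path):
    aborted = 0  # completions printed as "ABORTED - SQ DELETION" so far
    with open(path, errors="replace") as log:
        for line in log:
            aborted += line.count("ABORTED - SQ DELETION")
            m = FAILOVER.search(line)
            if m:
                print(f"failover {m.group(1)} -> {m.group(2)} (aborted completions so far: {aborted})")
            if "Resetting controller successful" in line:
                print(f"controller reset completed (aborted completions so far: {aborted})")

if __name__ == "__main__":
    # e.g. ./summarize_failover.py console.log  (both names are assumptions)
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")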
00:21:54.152 [2024-11-17 22:21:39.477617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.477902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.478016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.478125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.478201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.478294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.478371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.478440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.478508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.478586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.478646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.478712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.478793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.478885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.478956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.479030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.479090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.479168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.479228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.479299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.479368] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.479445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.479513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.479590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.479651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.479724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.479820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.479891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.479977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.480048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.480131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.480201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.480284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.480354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.480430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.480507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.480576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.480645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.480721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.480808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.480900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.152 [2024-11-17 22:21:39.480973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.152 [2024-11-17 22:21:39.481044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.481134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.481203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.481276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.481346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.481415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.481485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.481546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.481605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.481677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.481739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.481856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.481931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.482021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.482102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.482173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.482247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.482336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.482397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:99 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.482464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.482524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.482596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.482658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.482732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.482816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.482891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.482961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.483030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.483098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.483174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.483236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.483305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.483373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.483456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.483530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.153 [2024-11-17 22:21:39.483591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.483663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.483760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.483829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84104 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.483902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.483964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.484035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.484097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.484173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.484234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.484310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.484372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.484440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.484508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.484586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.484655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.484729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.484837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.484914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.484977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.485048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.485109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.485180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.485254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:54.153 [2024-11-17 22:21:39.485323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.485384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.485452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.485526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.485586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.153 [2024-11-17 22:21:39.485653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.153 [2024-11-17 22:21:39.485722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.485835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.485909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.485972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.486063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.486130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.486209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.486286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.486358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.486419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.486487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.486548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.486606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.486675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.486743] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.486823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.486914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.486976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.487045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.487105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.487171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.487231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.487309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.487370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.487442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.487503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.487574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.487633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.487701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.487807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.487886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.487976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.488050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.488113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.488196] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.488257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.488346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.488414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.488485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.488545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.488615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.488674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.488744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.488815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.488890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.488951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.489025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.489104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.489165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.489223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.489289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.489357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.489416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.489484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.489559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.489627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.489698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.489797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.489872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.489934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.490032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.490110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.490185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.490247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.490350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.490416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.490489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.490548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.490605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.490693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.154 [2024-11-17 22:21:39.490782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.490825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.154 [2024-11-17 22:21:39.490860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.154 [2024-11-17 22:21:39.490887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.155 [2024-11-17 22:21:39.490901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:54.155 [2024-11-17 22:21:39.490916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.155 [2024-11-17 22:21:39.490930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.490945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.490958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.490973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.490996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.155 [2024-11-17 22:21:39.491081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.155 [2024-11-17 22:21:39.491109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.155 [2024-11-17 22:21:39.491138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491252] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.155 [2024-11-17 22:21:39.491296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.155 [2024-11-17 22:21:39.491323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.155 [2024-11-17 22:21:39.491399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.155 [2024-11-17 22:21:39.491424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.155 [2024-11-17 22:21:39.491499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491512] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.155 [2024-11-17 22:21:39.491524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.491773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.491784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.493496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.493559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.493628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.493687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.493776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.493857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.493928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.494014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.494087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.155 [2024-11-17 22:21:39.494156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.155 [2024-11-17 22:21:39.494216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877ae0 is same with the state(5) to be set 00:21:54.155 [2024-11-17 22:21:39.494322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:54.155 [2024-11-17 22:21:39.494389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:54.155 [2024-11-17 22:21:39.494454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84080 len:8 PRP1 0x0 PRP2 0x0 00:21:54.156 [2024-11-17 22:21:39.494551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.156 [2024-11-17 22:21:39.494674] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x877ae0 was disconnected and freed. reset controller. 
00:21:54.156 [2024-11-17 22:21:39.494752] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:54.156 [2024-11-17 22:21:39.494889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:54.156 [2024-11-17 22:21:39.494978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.156 [2024-11-17 22:21:39.495060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:54.156 [2024-11-17 22:21:39.495156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.156 [2024-11-17 22:21:39.495214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:54.156 [2024-11-17 22:21:39.495271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.156 [2024-11-17 22:21:39.495336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:54.156 [2024-11-17 22:21:39.495403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.156 [2024-11-17 22:21:39.495459] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:54.156 [2024-11-17 22:21:39.495552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87f440 (9): Bad file descriptor
00:21:54.156 [2024-11-17 22:21:39.498093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:54.156 [2024-11-17 22:21:39.520593] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:54.156 [2024-11-17 22:21:43.971171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:54.156 [2024-11-17 22:21:43.971449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats here for roughly 120 further queued READ and WRITE commands on qid:1 (lba 9024 through 10256), each one completed as "ABORTED - SQ DELETION (00/08)" while the submission queue is torn down for the failover; the repeated entries are elided ...]
00:21:54.159 [2024-11-17 22:21:43.977066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f21f0 is same with the state(5) to be set
00:21:54.159 [2024-11-17 22:21:43.977083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:54.159 [2024-11-17 22:21:43.977093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:54.159 [2024-11-17 22:21:43.977104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9736 len:8 PRP1 0x0 PRP2 0x0
00:21:54.159 [2024-11-17 22:21:43.977116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.159 [2024-11-17 22:21:43.977186] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8f21f0 was disconnected and freed. reset controller.
00:21:54.159 [2024-11-17 22:21:43.977203] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:21:54.159 [2024-11-17 22:21:43.977259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:54.159 [2024-11-17 22:21:43.977279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.159 [2024-11-17 22:21:43.977302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:54.159 [2024-11-17 22:21:43.977316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.159 [2024-11-17 22:21:43.977329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:54.159 [2024-11-17 22:21:43.977342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.159 [2024-11-17 22:21:43.977354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:54.159 [2024-11-17 22:21:43.977366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:54.159 [2024-11-17 22:21:43.977378] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:54.159 [2024-11-17 22:21:43.979645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:54.159 [2024-11-17 22:21:43.979682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87f440 (9): Bad file descriptor
00:21:54.159 [2024-11-17 22:21:44.011959] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:54.159
00:21:54.159 Latency(us)
00:21:54.159 [2024-11-17T22:21:50.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:54.159 [2024-11-17T22:21:50.774Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:54.159 Verification LBA range: start 0x0 length 0x4000
00:21:54.159 NVMe0n1 : 15.01 15104.25 59.00 334.57 0.00 8275.74 621.85 29550.78
00:21:54.159 [2024-11-17T22:21:50.774Z] ===================================================================================================================
00:21:54.159 [2024-11-17T22:21:50.774Z] Total : 15104.25 59.00 334.57 0.00 8275.74 621.85 29550.78
00:21:54.159 Received shutdown signal, test time was about 15.000000 seconds
00:21:54.159
00:21:54.159 Latency(us)
00:21:54.160 [2024-11-17T22:21:50.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:54.160 [2024-11-17T22:21:50.775Z] ===================================================================================================================
00:21:54.160 [2024-11-17T22:21:50.775Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:54.160 22:21:50 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:54.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
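The 15-second summary table above can be sanity-checked from its own columns: with a 4096-byte I/O size the MiB/s figure should equal IOPS x 4096 / 2^20, and with a queue depth of 128 Little's law puts the sustainable IOPS near depth divided by the average latency. A quick check with the values copied from the table (the measured IOPS falling a little short of the Little's-law estimate is expected, since the queue sits idle during the three failover resets):

```bash
# Cross-check the bdevperf summary: throughput vs IOPS, and IOPS vs latency.
awk 'BEGIN {
    iops = 15104.25; avg_lat_us = 8275.74; qd = 128; io_size = 4096
    printf "MiB/s from IOPS       : %.2f (table says 59.00)\n", iops * io_size / (1024 * 1024)
    printf "Littles-law IOPS bound: %.0f (table says 15104 measured)\n", qd / (avg_lat_us / 1e6)
}'
```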
00:21:54.160 22:21:50 -- host/failover.sh@65 -- # count=3 00:21:54.160 22:21:50 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:54.160 22:21:50 -- host/failover.sh@73 -- # bdevperf_pid=85160 00:21:54.160 22:21:50 -- host/failover.sh@75 -- # waitforlisten 85160 /var/tmp/bdevperf.sock 00:21:54.160 22:21:50 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:54.160 22:21:50 -- common/autotest_common.sh@829 -- # '[' -z 85160 ']' 00:21:54.160 22:21:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.160 22:21:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.160 22:21:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.160 22:21:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.160 22:21:50 -- common/autotest_common.sh@10 -- # set +x 00:21:54.419 22:21:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.419 22:21:51 -- common/autotest_common.sh@862 -- # return 0 00:21:54.419 22:21:51 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:54.676 [2024-11-17 22:21:51.261177] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:54.676 22:21:51 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:55.242 [2024-11-17 22:21:51.549453] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:55.242 22:21:51 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:55.242 NVMe0n1 00:21:55.242 22:21:51 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:55.501 00:21:55.760 22:21:52 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:55.760 00:21:56.019 22:21:52 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:56.019 22:21:52 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:56.278 22:21:52 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:56.278 22:21:52 -- host/failover.sh@87 -- # sleep 3 00:21:59.565 22:21:55 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:59.565 22:21:55 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:59.565 22:21:56 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:59.565 22:21:56 -- host/failover.sh@90 -- # run_test_pid=85297 00:21:59.565 22:21:56 -- host/failover.sh@92 -- # wait 85297 00:22:00.942 0 00:22:00.942 22:21:57 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:00.942 [2024-11-17 22:21:50.105938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:00.942 [2024-11-17 22:21:50.106107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85160 ] 00:22:00.942 [2024-11-17 22:21:50.237117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.942 [2024-11-17 22:21:50.330694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.942 [2024-11-17 22:21:52.861469] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:00.942 [2024-11-17 22:21:52.861581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.942 [2024-11-17 22:21:52.861604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.942 [2024-11-17 22:21:52.861620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.942 [2024-11-17 22:21:52.861633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.942 [2024-11-17 22:21:52.861646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.942 [2024-11-17 22:21:52.861659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.942 [2024-11-17 22:21:52.861672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.942 [2024-11-17 22:21:52.861684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.942 [2024-11-17 22:21:52.861697] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.942 [2024-11-17 22:21:52.861759] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.942 [2024-11-17 22:21:52.861804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9f440 (9): Bad file descriptor 00:22:00.942 [2024-11-17 22:21:52.872418] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:00.942 Running I/O for 1 seconds... 
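The try.txt excerpt above is the captured output of the second, short bdevperf run; the failover it records was provoked over the RPC socket with the sequence traced just before the cat: listeners are added on ports 4421 and 4422, the same NVMe0 controller is attached over all three ports, the active 4420 path is detached, and the verify workload is then started with perform_tests. Collected into one place for readability (a sketch; every address, port, path and the NQN is copied from the trace, and it assumes the target and the bdevperf RPC socket from this run are up):

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Expose two extra portals on the target side.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

# Attach the same controller to bdevperf over all three paths.
for port in 4420 4421 4422; do
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
done

# Drop the active path; bdev_nvme fails over to the next one.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
sleep 3

# Run the verify workload against the surviving path.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests
```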
00:22:00.942 00:22:00.942 Latency(us) 00:22:00.942 [2024-11-17T22:21:57.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.942 [2024-11-17T22:21:57.557Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:00.942 Verification LBA range: start 0x0 length 0x4000 00:22:00.942 NVMe0n1 : 1.01 13942.02 54.46 0.00 0.00 9139.31 1228.80 17873.45 00:22:00.942 [2024-11-17T22:21:57.557Z] =================================================================================================================== 00:22:00.942 [2024-11-17T22:21:57.557Z] Total : 13942.02 54.46 0.00 0.00 9139.31 1228.80 17873.45 00:22:00.942 22:21:57 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.942 22:21:57 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:01.201 22:21:57 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.460 22:21:57 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.460 22:21:57 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:01.719 22:21:58 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.719 22:21:58 -- host/failover.sh@101 -- # sleep 3 00:22:05.006 22:22:01 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:05.006 22:22:01 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:05.006 22:22:01 -- host/failover.sh@108 -- # killprocess 85160 00:22:05.006 22:22:01 -- common/autotest_common.sh@936 -- # '[' -z 85160 ']' 00:22:05.006 22:22:01 -- common/autotest_common.sh@940 -- # kill -0 85160 00:22:05.006 22:22:01 -- common/autotest_common.sh@941 -- # uname 00:22:05.006 22:22:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:05.006 22:22:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85160 00:22:05.006 killing process with pid 85160 00:22:05.006 22:22:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:05.006 22:22:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:05.006 22:22:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85160' 00:22:05.006 22:22:01 -- common/autotest_common.sh@955 -- # kill 85160 00:22:05.006 22:22:01 -- common/autotest_common.sh@960 -- # wait 85160 00:22:05.265 22:22:01 -- host/failover.sh@110 -- # sync 00:22:05.265 22:22:01 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:05.834 22:22:02 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:05.834 22:22:02 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:05.834 22:22:02 -- host/failover.sh@116 -- # nvmftestfini 00:22:05.834 22:22:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:05.834 22:22:02 -- nvmf/common.sh@116 -- # sync 00:22:05.834 22:22:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:05.834 22:22:02 -- nvmf/common.sh@119 -- # set +e 00:22:05.834 22:22:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:05.834 22:22:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:05.834 rmmod nvme_tcp 
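The teardown trace above detaches the remaining 4422 and 4421 paths, kills the bdevperf instance (pid 85160) through the killprocess helper, deletes the subsystem, and then starts nvmftestfini's module unload (the rmmod output continues below). Pieced together from the xtrace lines, the helper behaves roughly as in the following sketch (approximate only; the real function in autotest_common.sh has additional branches, e.g. for processes started through sudo, that this run never exercised):

```bash
# Approximate shape of the killprocess helper as exercised in this run.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                # refuse an empty pid
    kill -0 "$pid" || return 0               # already gone, nothing to do
    local process_name=
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [[ $process_name != sudo ]]; then     # sudo-wrapped processes need different handling
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    fi
}
```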
00:22:05.834 rmmod nvme_fabrics 00:22:05.834 rmmod nvme_keyring 00:22:05.834 22:22:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:05.834 22:22:02 -- nvmf/common.sh@123 -- # set -e 00:22:05.834 22:22:02 -- nvmf/common.sh@124 -- # return 0 00:22:05.834 22:22:02 -- nvmf/common.sh@477 -- # '[' -n 84800 ']' 00:22:05.834 22:22:02 -- nvmf/common.sh@478 -- # killprocess 84800 00:22:05.834 22:22:02 -- common/autotest_common.sh@936 -- # '[' -z 84800 ']' 00:22:05.834 22:22:02 -- common/autotest_common.sh@940 -- # kill -0 84800 00:22:05.834 22:22:02 -- common/autotest_common.sh@941 -- # uname 00:22:05.834 22:22:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:05.834 22:22:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84800 00:22:05.834 22:22:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:05.834 22:22:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:05.834 killing process with pid 84800 00:22:05.834 22:22:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84800' 00:22:05.834 22:22:02 -- common/autotest_common.sh@955 -- # kill 84800 00:22:05.834 22:22:02 -- common/autotest_common.sh@960 -- # wait 84800 00:22:06.093 22:22:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:06.093 22:22:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:06.093 22:22:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:06.093 22:22:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.093 22:22:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:06.093 22:22:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.093 22:22:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.093 22:22:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.093 22:22:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:06.093 00:22:06.093 real 0m32.985s 00:22:06.093 user 2m7.245s 00:22:06.093 sys 0m4.940s 00:22:06.093 22:22:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:06.093 ************************************ 00:22:06.093 END TEST nvmf_failover 00:22:06.093 ************************************ 00:22:06.093 22:22:02 -- common/autotest_common.sh@10 -- # set +x 00:22:06.093 22:22:02 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:06.093 22:22:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:06.093 22:22:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:06.093 22:22:02 -- common/autotest_common.sh@10 -- # set +x 00:22:06.093 ************************************ 00:22:06.093 START TEST nvmf_discovery 00:22:06.093 ************************************ 00:22:06.093 22:22:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:06.352 * Looking for test storage... 
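At this point the failover suite is wrapped up (END TEST nvmf_failover, with its real/user/sys timing block) and nvmf.sh immediately launches the next suite through the same run_test wrapper (START TEST nvmf_discovery, whose test-storage probe continues below). Judging only from the banners and timing visible in this log, the wrapper behaves roughly like the sketch that follows (simplified; the real run_test in autotest_common.sh also toggles xtrace and records per-test results for later reporting):

```bash
# Simplified model of the run_test wrapper that produces the
# "START TEST ..." / "END TEST ..." banners and the timing block.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"      # e.g. run_test nvmf_discovery .../host/discovery.sh --transport=tcp
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
```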
00:22:06.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:06.352 22:22:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:06.352 22:22:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:06.352 22:22:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:06.352 22:22:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:06.352 22:22:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:06.352 22:22:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:06.352 22:22:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:06.352 22:22:02 -- scripts/common.sh@335 -- # IFS=.-: 00:22:06.352 22:22:02 -- scripts/common.sh@335 -- # read -ra ver1 00:22:06.352 22:22:02 -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.352 22:22:02 -- scripts/common.sh@336 -- # read -ra ver2 00:22:06.352 22:22:02 -- scripts/common.sh@337 -- # local 'op=<' 00:22:06.352 22:22:02 -- scripts/common.sh@339 -- # ver1_l=2 00:22:06.352 22:22:02 -- scripts/common.sh@340 -- # ver2_l=1 00:22:06.352 22:22:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:06.352 22:22:02 -- scripts/common.sh@343 -- # case "$op" in 00:22:06.352 22:22:02 -- scripts/common.sh@344 -- # : 1 00:22:06.352 22:22:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:06.352 22:22:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.352 22:22:02 -- scripts/common.sh@364 -- # decimal 1 00:22:06.352 22:22:02 -- scripts/common.sh@352 -- # local d=1 00:22:06.352 22:22:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.352 22:22:02 -- scripts/common.sh@354 -- # echo 1 00:22:06.352 22:22:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:06.352 22:22:02 -- scripts/common.sh@365 -- # decimal 2 00:22:06.352 22:22:02 -- scripts/common.sh@352 -- # local d=2 00:22:06.352 22:22:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.352 22:22:02 -- scripts/common.sh@354 -- # echo 2 00:22:06.352 22:22:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:06.352 22:22:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:06.352 22:22:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:06.352 22:22:02 -- scripts/common.sh@367 -- # return 0 00:22:06.352 22:22:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.352 22:22:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:06.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.352 --rc genhtml_branch_coverage=1 00:22:06.352 --rc genhtml_function_coverage=1 00:22:06.352 --rc genhtml_legend=1 00:22:06.352 --rc geninfo_all_blocks=1 00:22:06.352 --rc geninfo_unexecuted_blocks=1 00:22:06.352 00:22:06.352 ' 00:22:06.352 22:22:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:06.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.352 --rc genhtml_branch_coverage=1 00:22:06.352 --rc genhtml_function_coverage=1 00:22:06.352 --rc genhtml_legend=1 00:22:06.352 --rc geninfo_all_blocks=1 00:22:06.352 --rc geninfo_unexecuted_blocks=1 00:22:06.352 00:22:06.352 ' 00:22:06.352 22:22:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:06.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.352 --rc genhtml_branch_coverage=1 00:22:06.353 --rc genhtml_function_coverage=1 00:22:06.353 --rc genhtml_legend=1 00:22:06.353 --rc geninfo_all_blocks=1 00:22:06.353 --rc geninfo_unexecuted_blocks=1 00:22:06.353 00:22:06.353 ' 00:22:06.353 
22:22:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.353 --rc genhtml_branch_coverage=1 00:22:06.353 --rc genhtml_function_coverage=1 00:22:06.353 --rc genhtml_legend=1 00:22:06.353 --rc geninfo_all_blocks=1 00:22:06.353 --rc geninfo_unexecuted_blocks=1 00:22:06.353 00:22:06.353 ' 00:22:06.353 22:22:02 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:06.353 22:22:02 -- nvmf/common.sh@7 -- # uname -s 00:22:06.353 22:22:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.353 22:22:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.353 22:22:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.353 22:22:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.353 22:22:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.353 22:22:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.353 22:22:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.353 22:22:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.353 22:22:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.353 22:22:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.353 22:22:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:22:06.353 22:22:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:22:06.353 22:22:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.353 22:22:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.353 22:22:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:06.353 22:22:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:06.353 22:22:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.353 22:22:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.353 22:22:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.353 22:22:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.353 22:22:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.353 22:22:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.353 22:22:02 -- paths/export.sh@5 -- # export PATH 00:22:06.353 22:22:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.353 22:22:02 -- nvmf/common.sh@46 -- # : 0 00:22:06.353 22:22:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:06.353 22:22:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:06.353 22:22:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:06.353 22:22:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.353 22:22:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.353 22:22:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:06.353 22:22:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:06.353 22:22:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:06.353 22:22:02 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:06.353 22:22:02 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:06.353 22:22:02 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:06.353 22:22:02 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:06.353 22:22:02 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:06.353 22:22:02 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:06.353 22:22:02 -- host/discovery.sh@25 -- # nvmftestinit 00:22:06.353 22:22:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:06.353 22:22:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.353 22:22:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:06.353 22:22:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:06.353 22:22:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:06.353 22:22:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.353 22:22:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.353 22:22:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.353 22:22:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:06.353 22:22:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:06.353 22:22:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:06.353 22:22:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:06.353 22:22:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:06.353 22:22:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:06.353 22:22:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.353 22:22:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.353 22:22:02 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:06.353 22:22:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:06.353 22:22:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:06.353 22:22:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:06.353 22:22:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:06.353 22:22:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.353 22:22:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:06.353 22:22:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:06.353 22:22:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:06.353 22:22:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:06.353 22:22:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:06.353 22:22:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:06.353 Cannot find device "nvmf_tgt_br" 00:22:06.353 22:22:02 -- nvmf/common.sh@154 -- # true 00:22:06.353 22:22:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:06.353 Cannot find device "nvmf_tgt_br2" 00:22:06.353 22:22:02 -- nvmf/common.sh@155 -- # true 00:22:06.353 22:22:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:06.353 22:22:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:06.353 Cannot find device "nvmf_tgt_br" 00:22:06.353 22:22:02 -- nvmf/common.sh@157 -- # true 00:22:06.353 22:22:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:06.353 Cannot find device "nvmf_tgt_br2" 00:22:06.353 22:22:02 -- nvmf/common.sh@158 -- # true 00:22:06.353 22:22:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:06.353 22:22:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:06.616 22:22:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:06.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:06.616 22:22:02 -- nvmf/common.sh@161 -- # true 00:22:06.616 22:22:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:06.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:06.616 22:22:02 -- nvmf/common.sh@162 -- # true 00:22:06.616 22:22:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:06.616 22:22:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:06.616 22:22:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:06.616 22:22:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:06.616 22:22:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:06.616 22:22:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:06.616 22:22:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:06.616 22:22:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:06.616 22:22:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:06.616 22:22:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:06.616 22:22:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:06.616 22:22:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:06.616 22:22:03 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:06.616 22:22:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:06.616 22:22:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:06.616 22:22:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:06.616 22:22:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:06.616 22:22:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:06.616 22:22:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:06.616 22:22:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:06.616 22:22:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:06.616 22:22:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:06.616 22:22:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:06.616 22:22:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:06.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:22:06.616 00:22:06.616 --- 10.0.0.2 ping statistics --- 00:22:06.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.616 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:06.616 22:22:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:06.616 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:06.616 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:22:06.616 00:22:06.616 --- 10.0.0.3 ping statistics --- 00:22:06.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.616 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:06.616 22:22:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:06.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:06.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:22:06.616 00:22:06.616 --- 10.0.0.1 ping statistics --- 00:22:06.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.616 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:22:06.616 22:22:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.616 22:22:03 -- nvmf/common.sh@421 -- # return 0 00:22:06.616 22:22:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:06.616 22:22:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.616 22:22:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:06.616 22:22:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:06.616 22:22:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.616 22:22:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:06.616 22:22:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:06.616 22:22:03 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:06.616 22:22:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:06.616 22:22:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:06.616 22:22:03 -- common/autotest_common.sh@10 -- # set +x 00:22:06.616 22:22:03 -- nvmf/common.sh@469 -- # nvmfpid=85609 00:22:06.616 22:22:03 -- nvmf/common.sh@470 -- # waitforlisten 85609 00:22:06.616 22:22:03 -- common/autotest_common.sh@829 -- # '[' -z 85609 ']' 00:22:06.616 22:22:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:06.616 22:22:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.616 22:22:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.616 22:22:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.616 22:22:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.616 22:22:03 -- common/autotest_common.sh@10 -- # set +x 00:22:06.898 [2024-11-17 22:22:03.280841] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:06.898 [2024-11-17 22:22:03.280931] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.898 [2024-11-17 22:22:03.419540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.179 [2024-11-17 22:22:03.528854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:07.179 [2024-11-17 22:22:03.528996] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.179 [2024-11-17 22:22:03.529009] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.179 [2024-11-17 22:22:03.529018] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
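The ip/iptables sequence traced above is the harness's nvmf_veth_init: it builds the small virtual topology that lets the target (inside the nvmf_tgt_ns_spdk namespace) and the host (in the root namespace) reach each other over TCP, then verifies it with the three pings. Stripped of the xtrace noise, the setup amounts to roughly this, with the interface names and addresses used in this run:

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry traffic, the *_br ends get bridged in the root ns
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator on 10.0.0.1, target portals on 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the root-namespace ends together and let NVMe/TCP traffic through
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # connectivity check in both directions
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1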
00:22:07.179 [2024-11-17 22:22:03.529053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.754 22:22:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.754 22:22:04 -- common/autotest_common.sh@862 -- # return 0 00:22:07.754 22:22:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:07.754 22:22:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.754 22:22:04 -- common/autotest_common.sh@10 -- # set +x 00:22:07.754 22:22:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.754 22:22:04 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:07.754 22:22:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.754 22:22:04 -- common/autotest_common.sh@10 -- # set +x 00:22:07.754 [2024-11-17 22:22:04.351718] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.754 22:22:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.754 22:22:04 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:07.754 22:22:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.754 22:22:04 -- common/autotest_common.sh@10 -- # set +x 00:22:07.754 [2024-11-17 22:22:04.359903] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:07.754 22:22:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.754 22:22:04 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:07.754 22:22:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.754 22:22:04 -- common/autotest_common.sh@10 -- # set +x 00:22:08.013 null0 00:22:08.013 22:22:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.013 22:22:04 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:08.013 22:22:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.013 22:22:04 -- common/autotest_common.sh@10 -- # set +x 00:22:08.013 null1 00:22:08.013 22:22:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.013 22:22:04 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:08.013 22:22:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.013 22:22:04 -- common/autotest_common.sh@10 -- # set +x 00:22:08.013 22:22:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.013 22:22:04 -- host/discovery.sh@45 -- # hostpid=85659 00:22:08.013 22:22:04 -- host/discovery.sh@46 -- # waitforlisten 85659 /tmp/host.sock 00:22:08.013 22:22:04 -- common/autotest_common.sh@829 -- # '[' -z 85659 ']' 00:22:08.013 22:22:04 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:08.013 22:22:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:08.013 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:08.013 22:22:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.013 22:22:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:08.013 22:22:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.013 22:22:04 -- common/autotest_common.sh@10 -- # set +x 00:22:08.013 [2024-11-17 22:22:04.434860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
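With the network up, discovery.sh runs two SPDK apps: the target inside the namespace (nvmfpid 85609), driven through the harness's rpc_cmd wrapper, and a second nvmf_tgt started with -m 0x1 -r /tmp/host.sock (hostpid 85659) that plays the NVMe-oF host. The target-side configuration traced above corresponds to direct rpc.py calls along these lines (rpc_cmd is the harness shorthand for this; it is assumed here to hit the target's default RPC socket):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py       # target-side RPC

    $rpc nvmf_create_transport -t tcp -o -u 8192           # TCP transport, options as traced
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                         # discovery service on port 8009
    $rpc bdev_null_create null0 1000 512                   # two 1000 MB, 512 B-block null bdevs
    $rpc bdev_null_create null1 1000 512
    $rpc bdev_wait_for_examine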
00:22:08.013 [2024-11-17 22:22:04.434944] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85659 ] 00:22:08.013 [2024-11-17 22:22:04.569666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.271 [2024-11-17 22:22:04.666571] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:08.271 [2024-11-17 22:22:04.666795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.839 22:22:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.839 22:22:05 -- common/autotest_common.sh@862 -- # return 0 00:22:08.839 22:22:05 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:08.839 22:22:05 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:08.839 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.839 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:08.839 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.839 22:22:05 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:08.839 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.839 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:08.839 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.839 22:22:05 -- host/discovery.sh@72 -- # notify_id=0 00:22:08.839 22:22:05 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:08.839 22:22:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:08.839 22:22:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:08.839 22:22:05 -- host/discovery.sh@59 -- # sort 00:22:08.839 22:22:05 -- host/discovery.sh@59 -- # xargs 00:22:08.839 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.839 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:08.839 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.839 22:22:05 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:08.839 22:22:05 -- host/discovery.sh@79 -- # get_bdev_list 00:22:08.839 22:22:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:08.839 22:22:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.839 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.839 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:08.839 22:22:05 -- host/discovery.sh@55 -- # sort 00:22:08.839 22:22:05 -- host/discovery.sh@55 -- # xargs 00:22:09.098 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.098 22:22:05 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:09.098 22:22:05 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:09.098 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.098 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:09.098 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.098 22:22:05 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:09.098 22:22:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:09.098 22:22:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:09.098 22:22:05 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.098 22:22:05 -- host/discovery.sh@59 -- # xargs 00:22:09.098 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:09.098 22:22:05 -- host/discovery.sh@59 -- # sort 00:22:09.098 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.098 22:22:05 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:09.098 22:22:05 -- host/discovery.sh@83 -- # get_bdev_list 00:22:09.098 22:22:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.098 22:22:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:09.098 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.098 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:09.098 22:22:05 -- host/discovery.sh@55 -- # sort 00:22:09.098 22:22:05 -- host/discovery.sh@55 -- # xargs 00:22:09.098 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.098 22:22:05 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:09.098 22:22:05 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:09.098 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.098 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:09.098 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.098 22:22:05 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:09.098 22:22:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:09.098 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.098 22:22:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:09.098 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:09.098 22:22:05 -- host/discovery.sh@59 -- # sort 00:22:09.098 22:22:05 -- host/discovery.sh@59 -- # xargs 00:22:09.098 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.098 22:22:05 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:09.098 22:22:05 -- host/discovery.sh@87 -- # get_bdev_list 00:22:09.098 22:22:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:09.098 22:22:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.098 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.098 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:09.098 22:22:05 -- host/discovery.sh@55 -- # sort 00:22:09.098 22:22:05 -- host/discovery.sh@55 -- # xargs 00:22:09.098 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.357 22:22:05 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:09.357 22:22:05 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:09.357 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.357 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:09.357 [2024-11-17 22:22:05.736087] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.357 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.357 22:22:05 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:09.357 22:22:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:09.357 22:22:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:09.357 22:22:05 -- host/discovery.sh@59 -- # sort 00:22:09.357 22:22:05 -- host/discovery.sh@59 -- # xargs 00:22:09.357 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.357 22:22:05 -- common/autotest_common.sh@10 -- # set +x 
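The host side started its discovery poller earlier (host/discovery.sh@51 above), so this block is about giving it something to find: the target creates cnode0, backs it with null0, listens on 4420 and, just below, allows the test host NQN, while the helpers re-check the host's view after each step. In rough shell form, with the same rpc.py path as above and the host app's socket at /tmp/host.sock:

    tgt_rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    host_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

    # host: follow the discovery service on 10.0.0.2:8009 and auto-attach what it advertises
    $host_rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test

    # target: create and expose the first subsystem
    $tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $tgt_rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

    # host: the get_subsystem_names / get_bdev_list helpers reduce to
    $host_rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    $host_rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs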
00:22:09.357 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.357 22:22:05 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:09.357 22:22:05 -- host/discovery.sh@93 -- # get_bdev_list 00:22:09.357 22:22:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.357 22:22:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:09.357 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.357 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:09.357 22:22:05 -- host/discovery.sh@55 -- # sort 00:22:09.357 22:22:05 -- host/discovery.sh@55 -- # xargs 00:22:09.357 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.357 22:22:05 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:09.357 22:22:05 -- host/discovery.sh@94 -- # get_notification_count 00:22:09.357 22:22:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:09.357 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.357 22:22:05 -- host/discovery.sh@74 -- # jq '. | length' 00:22:09.357 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:09.357 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.357 22:22:05 -- host/discovery.sh@74 -- # notification_count=0 00:22:09.357 22:22:05 -- host/discovery.sh@75 -- # notify_id=0 00:22:09.357 22:22:05 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:09.357 22:22:05 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:09.357 22:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.357 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:22:09.357 22:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.357 22:22:05 -- host/discovery.sh@100 -- # sleep 1 00:22:09.925 [2024-11-17 22:22:06.394568] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:09.925 [2024-11-17 22:22:06.394599] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:09.925 [2024-11-17 22:22:06.394615] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:09.925 [2024-11-17 22:22:06.480657] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:09.925 [2024-11-17 22:22:06.536590] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:09.925 [2024-11-17 22:22:06.536650] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:10.492 22:22:06 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:10.492 22:22:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:10.492 22:22:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.492 22:22:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:10.492 22:22:06 -- host/discovery.sh@59 -- # sort 00:22:10.492 22:22:06 -- common/autotest_common.sh@10 -- # set +x 00:22:10.492 22:22:06 -- host/discovery.sh@59 -- # xargs 00:22:10.492 22:22:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.492 22:22:06 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.492 22:22:06 -- host/discovery.sh@102 -- # get_bdev_list 00:22:10.492 22:22:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:22:10.492 22:22:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.492 22:22:06 -- common/autotest_common.sh@10 -- # set +x 00:22:10.492 22:22:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:10.492 22:22:06 -- host/discovery.sh@55 -- # sort 00:22:10.492 22:22:06 -- host/discovery.sh@55 -- # xargs 00:22:10.492 22:22:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.492 22:22:07 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:10.492 22:22:07 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:10.492 22:22:07 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:10.492 22:22:07 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:10.492 22:22:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.492 22:22:07 -- common/autotest_common.sh@10 -- # set +x 00:22:10.492 22:22:07 -- host/discovery.sh@63 -- # sort -n 00:22:10.492 22:22:07 -- host/discovery.sh@63 -- # xargs 00:22:10.492 22:22:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.492 22:22:07 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:10.492 22:22:07 -- host/discovery.sh@104 -- # get_notification_count 00:22:10.492 22:22:07 -- host/discovery.sh@74 -- # jq '. | length' 00:22:10.492 22:22:07 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:10.492 22:22:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.492 22:22:07 -- common/autotest_common.sh@10 -- # set +x 00:22:10.492 22:22:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.750 22:22:07 -- host/discovery.sh@74 -- # notification_count=1 00:22:10.750 22:22:07 -- host/discovery.sh@75 -- # notify_id=1 00:22:10.750 22:22:07 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:10.750 22:22:07 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:10.751 22:22:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.751 22:22:07 -- common/autotest_common.sh@10 -- # set +x 00:22:10.751 22:22:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.751 22:22:07 -- host/discovery.sh@109 -- # sleep 1 00:22:11.688 22:22:08 -- host/discovery.sh@110 -- # get_bdev_list 00:22:11.688 22:22:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.688 22:22:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.688 22:22:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.688 22:22:08 -- common/autotest_common.sh@10 -- # set +x 00:22:11.688 22:22:08 -- host/discovery.sh@55 -- # sort 00:22:11.688 22:22:08 -- host/discovery.sh@55 -- # xargs 00:22:11.688 22:22:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.688 22:22:08 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:11.688 22:22:08 -- host/discovery.sh@111 -- # get_notification_count 00:22:11.688 22:22:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:11.688 22:22:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.688 22:22:08 -- common/autotest_common.sh@10 -- # set +x 00:22:11.688 22:22:08 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:11.688 22:22:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.688 22:22:08 -- host/discovery.sh@74 -- # notification_count=1 00:22:11.688 22:22:08 -- host/discovery.sh@75 -- # notify_id=2 00:22:11.688 22:22:08 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:11.688 22:22:08 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:11.688 22:22:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.688 22:22:08 -- common/autotest_common.sh@10 -- # set +x 00:22:11.688 [2024-11-17 22:22:08.253201] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:11.688 [2024-11-17 22:22:08.254220] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:11.688 [2024-11-17 22:22:08.254269] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:11.688 22:22:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.688 22:22:08 -- host/discovery.sh@117 -- # sleep 1 00:22:11.947 [2024-11-17 22:22:08.340275] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:11.947 [2024-11-17 22:22:08.397473] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:11.947 [2024-11-17 22:22:08.397514] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:11.947 [2024-11-17 22:22:08.397520] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:12.882 22:22:09 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:12.882 22:22:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:12.882 22:22:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.882 22:22:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:12.882 22:22:09 -- common/autotest_common.sh@10 -- # set +x 00:22:12.882 22:22:09 -- host/discovery.sh@59 -- # sort 00:22:12.882 22:22:09 -- host/discovery.sh@59 -- # xargs 00:22:12.882 22:22:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.882 22:22:09 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.882 22:22:09 -- host/discovery.sh@119 -- # get_bdev_list 00:22:12.882 22:22:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.882 22:22:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.882 22:22:09 -- host/discovery.sh@55 -- # sort 00:22:12.882 22:22:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:12.882 22:22:09 -- common/autotest_common.sh@10 -- # set +x 00:22:12.882 22:22:09 -- host/discovery.sh@55 -- # xargs 00:22:12.882 22:22:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.882 22:22:09 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:12.882 22:22:09 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:12.882 22:22:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:12.882 22:22:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:12.882 22:22:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.882 22:22:09 -- common/autotest_common.sh@10 -- # set +x 00:22:12.882 22:22:09 -- host/discovery.sh@63 
-- # sort -n 00:22:12.882 22:22:09 -- host/discovery.sh@63 -- # xargs 00:22:12.882 22:22:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.882 22:22:09 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:12.882 22:22:09 -- host/discovery.sh@121 -- # get_notification_count 00:22:12.882 22:22:09 -- host/discovery.sh@74 -- # jq '. | length' 00:22:12.882 22:22:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:12.882 22:22:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.882 22:22:09 -- common/autotest_common.sh@10 -- # set +x 00:22:12.882 22:22:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.882 22:22:09 -- host/discovery.sh@74 -- # notification_count=0 00:22:12.882 22:22:09 -- host/discovery.sh@75 -- # notify_id=2 00:22:12.882 22:22:09 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:12.882 22:22:09 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:12.882 22:22:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.882 22:22:09 -- common/autotest_common.sh@10 -- # set +x 00:22:12.882 [2024-11-17 22:22:09.481705] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:12.882 [2024-11-17 22:22:09.481777] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:12.882 [2024-11-17 22:22:09.484240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.882 [2024-11-17 22:22:09.484290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.882 [2024-11-17 22:22:09.484301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.882 [2024-11-17 22:22:09.484309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.882 [2024-11-17 22:22:09.484318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.882 [2024-11-17 22:22:09.484325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.882 [2024-11-17 22:22:09.484334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.882 [2024-11-17 22:22:09.484341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.882 [2024-11-17 22:22:09.484349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e89c0 is same with the state(5) to be set 00:22:12.882 22:22:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.882 22:22:09 -- host/discovery.sh@127 -- # sleep 1 00:22:12.882 [2024-11-17 22:22:09.494205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e89c0 (9): Bad file descriptor 00:22:13.142 [2024-11-17 22:22:09.504221] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.142 [2024-11-17 22:22:09.504314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:13.142 [2024-11-17 22:22:09.504356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.504372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e89c0 with addr=10.0.0.2, port=4420 00:22:13.142 [2024-11-17 22:22:09.504381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e89c0 is same with the state(5) to be set 00:22:13.142 [2024-11-17 22:22:09.504395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e89c0 (9): Bad file descriptor 00:22:13.142 [2024-11-17 22:22:09.504407] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.142 [2024-11-17 22:22:09.504415] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.142 [2024-11-17 22:22:09.504423] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.142 [2024-11-17 22:22:09.504436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:13.142 [2024-11-17 22:22:09.514269] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.142 [2024-11-17 22:22:09.514353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.514393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.514407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e89c0 with addr=10.0.0.2, port=4420 00:22:13.142 [2024-11-17 22:22:09.514416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e89c0 is same with the state(5) to be set 00:22:13.142 [2024-11-17 22:22:09.514430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e89c0 (9): Bad file descriptor 00:22:13.142 [2024-11-17 22:22:09.514443] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.142 [2024-11-17 22:22:09.514450] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.142 [2024-11-17 22:22:09.514457] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.142 [2024-11-17 22:22:09.514469] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:13.142 [2024-11-17 22:22:09.524321] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.142 [2024-11-17 22:22:09.524390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.524428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.524442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e89c0 with addr=10.0.0.2, port=4420 00:22:13.142 [2024-11-17 22:22:09.524451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e89c0 is same with the state(5) to be set 00:22:13.142 [2024-11-17 22:22:09.524464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e89c0 (9): Bad file descriptor 00:22:13.142 [2024-11-17 22:22:09.524475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.142 [2024-11-17 22:22:09.524483] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.142 [2024-11-17 22:22:09.524490] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.142 [2024-11-17 22:22:09.524502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:13.142 [2024-11-17 22:22:09.534366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.142 [2024-11-17 22:22:09.534454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.534493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.534507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e89c0 with addr=10.0.0.2, port=4420 00:22:13.142 [2024-11-17 22:22:09.534517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e89c0 is same with the state(5) to be set 00:22:13.142 [2024-11-17 22:22:09.534531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e89c0 (9): Bad file descriptor 00:22:13.142 [2024-11-17 22:22:09.534543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.142 [2024-11-17 22:22:09.534550] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.142 [2024-11-17 22:22:09.534557] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.142 [2024-11-17 22:22:09.534569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:13.142 [2024-11-17 22:22:09.544415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.142 [2024-11-17 22:22:09.544490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.544528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.544542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e89c0 with addr=10.0.0.2, port=4420 00:22:13.142 [2024-11-17 22:22:09.544551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e89c0 is same with the state(5) to be set 00:22:13.142 [2024-11-17 22:22:09.544565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e89c0 (9): Bad file descriptor 00:22:13.142 [2024-11-17 22:22:09.544576] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.142 [2024-11-17 22:22:09.544584] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.142 [2024-11-17 22:22:09.544591] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.142 [2024-11-17 22:22:09.544603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:13.142 [2024-11-17 22:22:09.554463] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.142 [2024-11-17 22:22:09.554536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.554573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.554587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e89c0 with addr=10.0.0.2, port=4420 00:22:13.142 [2024-11-17 22:22:09.554596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e89c0 is same with the state(5) to be set 00:22:13.142 [2024-11-17 22:22:09.554610] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e89c0 (9): Bad file descriptor 00:22:13.142 [2024-11-17 22:22:09.554621] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.142 [2024-11-17 22:22:09.554629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.142 [2024-11-17 22:22:09.554636] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.142 [2024-11-17 22:22:09.554648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
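The repeated connect() failures in this stretch are the expected fallout of host/discovery.sh@126 above: the 4420 listener was just removed from cnode0, so the established path drops (hence the "Bad file descriptor" flushes) and each reconnect attempt to 10.0.0.2:4420 is refused with errno 111, until the next discovery log page prunes that path and only 4421 remains. The step being exercised, roughly:

    tgt_rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    host_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

    # target: stop listening on the first portal
    $tgt_rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # host: get_subsystem_paths nvme0 should settle on the remaining portal (4421)
    $host_rpc bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs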
00:22:13.142 [2024-11-17 22:22:09.564507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.142 [2024-11-17 22:22:09.564577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.564614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.142 [2024-11-17 22:22:09.564628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e89c0 with addr=10.0.0.2, port=4420 00:22:13.142 [2024-11-17 22:22:09.564638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e89c0 is same with the state(5) to be set 00:22:13.142 [2024-11-17 22:22:09.564650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e89c0 (9): Bad file descriptor 00:22:13.142 [2024-11-17 22:22:09.564662] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.143 [2024-11-17 22:22:09.564669] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.143 [2024-11-17 22:22:09.564677] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.143 [2024-11-17 22:22:09.564688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:13.143 [2024-11-17 22:22:09.569773] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:13.143 [2024-11-17 22:22:09.569818] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:14.079 22:22:10 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:14.079 22:22:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.079 22:22:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.079 22:22:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.079 22:22:10 -- host/discovery.sh@59 -- # sort 00:22:14.079 22:22:10 -- common/autotest_common.sh@10 -- # set +x 00:22:14.079 22:22:10 -- host/discovery.sh@59 -- # xargs 00:22:14.079 22:22:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.079 22:22:10 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.079 22:22:10 -- host/discovery.sh@129 -- # get_bdev_list 00:22:14.079 22:22:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.079 22:22:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.079 22:22:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.079 22:22:10 -- common/autotest_common.sh@10 -- # set +x 00:22:14.079 22:22:10 -- host/discovery.sh@55 -- # sort 00:22:14.079 22:22:10 -- host/discovery.sh@55 -- # xargs 00:22:14.079 22:22:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.079 22:22:10 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:14.079 22:22:10 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:14.079 22:22:10 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:14.079 22:22:10 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:14.079 22:22:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.079 22:22:10 -- host/discovery.sh@63 -- # sort -n 00:22:14.079 22:22:10 -- 
common/autotest_common.sh@10 -- # set +x 00:22:14.079 22:22:10 -- host/discovery.sh@63 -- # xargs 00:22:14.079 22:22:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.079 22:22:10 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:14.079 22:22:10 -- host/discovery.sh@131 -- # get_notification_count 00:22:14.079 22:22:10 -- host/discovery.sh@74 -- # jq '. | length' 00:22:14.079 22:22:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:14.079 22:22:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.079 22:22:10 -- common/autotest_common.sh@10 -- # set +x 00:22:14.079 22:22:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.338 22:22:10 -- host/discovery.sh@74 -- # notification_count=0 00:22:14.338 22:22:10 -- host/discovery.sh@75 -- # notify_id=2 00:22:14.338 22:22:10 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:14.338 22:22:10 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:14.338 22:22:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.338 22:22:10 -- common/autotest_common.sh@10 -- # set +x 00:22:14.338 22:22:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.338 22:22:10 -- host/discovery.sh@135 -- # sleep 1 00:22:15.274 22:22:11 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:15.274 22:22:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:15.274 22:22:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:15.274 22:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.274 22:22:11 -- common/autotest_common.sh@10 -- # set +x 00:22:15.274 22:22:11 -- host/discovery.sh@59 -- # sort 00:22:15.274 22:22:11 -- host/discovery.sh@59 -- # xargs 00:22:15.274 22:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.274 22:22:11 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:15.274 22:22:11 -- host/discovery.sh@137 -- # get_bdev_list 00:22:15.274 22:22:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:15.274 22:22:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:15.274 22:22:11 -- host/discovery.sh@55 -- # sort 00:22:15.274 22:22:11 -- host/discovery.sh@55 -- # xargs 00:22:15.274 22:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.274 22:22:11 -- common/autotest_common.sh@10 -- # set +x 00:22:15.274 22:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.274 22:22:11 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:15.274 22:22:11 -- host/discovery.sh@138 -- # get_notification_count 00:22:15.274 22:22:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:15.274 22:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.274 22:22:11 -- common/autotest_common.sh@10 -- # set +x 00:22:15.274 22:22:11 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:15.274 22:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.274 22:22:11 -- host/discovery.sh@74 -- # notification_count=2 00:22:15.274 22:22:11 -- host/discovery.sh@75 -- # notify_id=4 00:22:15.274 22:22:11 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:15.274 22:22:11 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:15.274 22:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.274 22:22:11 -- common/autotest_common.sh@10 -- # set +x 00:22:16.652 [2024-11-17 22:22:12.895317] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:16.652 [2024-11-17 22:22:12.895342] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:16.652 [2024-11-17 22:22:12.895358] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:16.652 [2024-11-17 22:22:12.981405] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:16.652 [2024-11-17 22:22:13.040232] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:16.652 [2024-11-17 22:22:13.040269] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:16.652 22:22:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.652 22:22:13 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:16.652 22:22:13 -- common/autotest_common.sh@650 -- # local es=0 00:22:16.652 22:22:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:16.652 22:22:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:16.652 22:22:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.652 22:22:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:16.652 22:22:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.652 22:22:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:16.652 22:22:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.652 22:22:13 -- common/autotest_common.sh@10 -- # set +x 00:22:16.653 2024/11/17 22:22:13 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:16.653 request: 00:22:16.653 { 00:22:16.653 "method": "bdev_nvme_start_discovery", 00:22:16.653 "params": { 00:22:16.653 "name": "nvme", 00:22:16.653 "trtype": "tcp", 00:22:16.653 "traddr": "10.0.0.2", 00:22:16.653 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:16.653 "adrfam": "ipv4", 00:22:16.653 "trsvcid": "8009", 00:22:16.653 "wait_for_attach": true 00:22:16.653 } 00:22:16.653 } 00:22:16.653 Got JSON-RPC error response 00:22:16.653 GoRPCClient: error on JSON-RPC call 00:22:16.653 22:22:13 -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:16.653 22:22:13 -- common/autotest_common.sh@653 -- # es=1 00:22:16.653 22:22:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:16.653 22:22:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:16.653 22:22:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:16.653 22:22:13 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:16.653 22:22:13 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:16.653 22:22:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.653 22:22:13 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:16.653 22:22:13 -- host/discovery.sh@67 -- # xargs 00:22:16.653 22:22:13 -- common/autotest_common.sh@10 -- # set +x 00:22:16.653 22:22:13 -- host/discovery.sh@67 -- # sort 00:22:16.653 22:22:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.653 22:22:13 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:16.653 22:22:13 -- host/discovery.sh@147 -- # get_bdev_list 00:22:16.653 22:22:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.653 22:22:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:16.653 22:22:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.653 22:22:13 -- common/autotest_common.sh@10 -- # set +x 00:22:16.653 22:22:13 -- host/discovery.sh@55 -- # sort 00:22:16.653 22:22:13 -- host/discovery.sh@55 -- # xargs 00:22:16.653 22:22:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.653 22:22:13 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:16.653 22:22:13 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:16.653 22:22:13 -- common/autotest_common.sh@650 -- # local es=0 00:22:16.653 22:22:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:16.653 22:22:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:16.653 22:22:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.653 22:22:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:16.653 22:22:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.653 22:22:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:16.653 22:22:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.653 22:22:13 -- common/autotest_common.sh@10 -- # set +x 00:22:16.653 2024/11/17 22:22:13 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:16.653 request: 00:22:16.653 { 00:22:16.653 "method": "bdev_nvme_start_discovery", 00:22:16.653 "params": { 00:22:16.653 "name": "nvme_second", 00:22:16.653 "trtype": "tcp", 00:22:16.653 "traddr": "10.0.0.2", 00:22:16.653 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:16.653 "adrfam": "ipv4", 00:22:16.653 "trsvcid": "8009", 00:22:16.653 "wait_for_attach": true 00:22:16.653 } 00:22:16.653 } 00:22:16.653 Got JSON-RPC error response 00:22:16.653 
GoRPCClient: error on JSON-RPC call 00:22:16.653 22:22:13 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:16.653 22:22:13 -- common/autotest_common.sh@653 -- # es=1 00:22:16.653 22:22:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:16.653 22:22:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:16.653 22:22:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:16.653 22:22:13 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:16.653 22:22:13 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:16.653 22:22:13 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:16.653 22:22:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.653 22:22:13 -- host/discovery.sh@67 -- # xargs 00:22:16.653 22:22:13 -- common/autotest_common.sh@10 -- # set +x 00:22:16.653 22:22:13 -- host/discovery.sh@67 -- # sort 00:22:16.653 22:22:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.653 22:22:13 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:16.653 22:22:13 -- host/discovery.sh@153 -- # get_bdev_list 00:22:16.653 22:22:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.653 22:22:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:16.653 22:22:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.653 22:22:13 -- host/discovery.sh@55 -- # sort 00:22:16.653 22:22:13 -- common/autotest_common.sh@10 -- # set +x 00:22:16.653 22:22:13 -- host/discovery.sh@55 -- # xargs 00:22:16.912 22:22:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.912 22:22:13 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:16.912 22:22:13 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:16.912 22:22:13 -- common/autotest_common.sh@650 -- # local es=0 00:22:16.912 22:22:13 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:16.912 22:22:13 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:16.912 22:22:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.912 22:22:13 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:16.912 22:22:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.912 22:22:13 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:16.912 22:22:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.912 22:22:13 -- common/autotest_common.sh@10 -- # set +x 00:22:17.848 [2024-11-17 22:22:14.302655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.848 [2024-11-17 22:22:14.302722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.848 [2024-11-17 22:22:14.302748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e4970 with addr=10.0.0.2, port=8010 00:22:17.848 [2024-11-17 22:22:14.302762] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:17.848 [2024-11-17 22:22:14.302770] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:17.848 [2024-11-17 22:22:14.302777] bdev_nvme.c:6821:discovery_poller: 
*ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:18.784 [2024-11-17 22:22:15.302628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.784 [2024-11-17 22:22:15.302692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.784 [2024-11-17 22:22:15.302707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e4970 with addr=10.0.0.2, port=8010 00:22:18.784 [2024-11-17 22:22:15.302719] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:18.784 [2024-11-17 22:22:15.302727] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:18.784 [2024-11-17 22:22:15.302733] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:19.721 [2024-11-17 22:22:16.302563] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:19.721 2024/11/17 22:22:16 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:19.721 request: 00:22:19.721 { 00:22:19.721 "method": "bdev_nvme_start_discovery", 00:22:19.721 "params": { 00:22:19.721 "name": "nvme_second", 00:22:19.721 "trtype": "tcp", 00:22:19.721 "traddr": "10.0.0.2", 00:22:19.721 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:19.721 "adrfam": "ipv4", 00:22:19.721 "trsvcid": "8010", 00:22:19.721 "attach_timeout_ms": 3000 00:22:19.721 } 00:22:19.721 } 00:22:19.721 Got JSON-RPC error response 00:22:19.721 GoRPCClient: error on JSON-RPC call 00:22:19.721 22:22:16 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:19.721 22:22:16 -- common/autotest_common.sh@653 -- # es=1 00:22:19.721 22:22:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.721 22:22:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.721 22:22:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.721 22:22:16 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:19.721 22:22:16 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:19.721 22:22:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.721 22:22:16 -- common/autotest_common.sh@10 -- # set +x 00:22:19.721 22:22:16 -- host/discovery.sh@67 -- # sort 00:22:19.721 22:22:16 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:19.721 22:22:16 -- host/discovery.sh@67 -- # xargs 00:22:19.721 22:22:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.980 22:22:16 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:19.980 22:22:16 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:19.980 22:22:16 -- host/discovery.sh@162 -- # kill 85659 00:22:19.980 22:22:16 -- host/discovery.sh@163 -- # nvmftestfini 00:22:19.980 22:22:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:19.980 22:22:16 -- nvmf/common.sh@116 -- # sync 00:22:19.980 22:22:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:19.980 22:22:16 -- nvmf/common.sh@119 -- # set +e 00:22:19.980 22:22:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:19.980 22:22:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:19.980 rmmod nvme_tcp 00:22:19.980 rmmod nvme_fabrics 00:22:19.980 rmmod nvme_keyring 00:22:19.980 22:22:16 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-fabrics 00:22:19.980 22:22:16 -- nvmf/common.sh@123 -- # set -e 00:22:19.980 22:22:16 -- nvmf/common.sh@124 -- # return 0 00:22:19.980 22:22:16 -- nvmf/common.sh@477 -- # '[' -n 85609 ']' 00:22:19.980 22:22:16 -- nvmf/common.sh@478 -- # killprocess 85609 00:22:19.980 22:22:16 -- common/autotest_common.sh@936 -- # '[' -z 85609 ']' 00:22:19.980 22:22:16 -- common/autotest_common.sh@940 -- # kill -0 85609 00:22:19.980 22:22:16 -- common/autotest_common.sh@941 -- # uname 00:22:19.980 22:22:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:19.980 22:22:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85609 00:22:19.980 22:22:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:19.980 killing process with pid 85609 00:22:19.980 22:22:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:19.980 22:22:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85609' 00:22:19.980 22:22:16 -- common/autotest_common.sh@955 -- # kill 85609 00:22:19.980 22:22:16 -- common/autotest_common.sh@960 -- # wait 85609 00:22:20.240 22:22:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:20.240 22:22:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:20.240 22:22:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:20.240 22:22:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.240 22:22:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:20.240 22:22:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.240 22:22:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.240 22:22:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.500 22:22:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:20.500 00:22:20.500 real 0m14.215s 00:22:20.500 user 0m27.604s 00:22:20.500 sys 0m1.653s 00:22:20.500 22:22:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:20.500 ************************************ 00:22:20.500 END TEST nvmf_discovery 00:22:20.500 ************************************ 00:22:20.500 22:22:16 -- common/autotest_common.sh@10 -- # set +x 00:22:20.500 22:22:16 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:20.500 22:22:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:20.500 22:22:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:20.500 22:22:16 -- common/autotest_common.sh@10 -- # set +x 00:22:20.500 ************************************ 00:22:20.500 START TEST nvmf_discovery_remove_ifc 00:22:20.500 ************************************ 00:22:20.500 22:22:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:20.500 * Looking for test storage... 
00:22:20.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:20.500 22:22:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:20.500 22:22:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:20.500 22:22:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:20.500 22:22:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:20.500 22:22:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:20.500 22:22:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:20.500 22:22:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:20.500 22:22:17 -- scripts/common.sh@335 -- # IFS=.-: 00:22:20.500 22:22:17 -- scripts/common.sh@335 -- # read -ra ver1 00:22:20.500 22:22:17 -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.500 22:22:17 -- scripts/common.sh@336 -- # read -ra ver2 00:22:20.500 22:22:17 -- scripts/common.sh@337 -- # local 'op=<' 00:22:20.500 22:22:17 -- scripts/common.sh@339 -- # ver1_l=2 00:22:20.500 22:22:17 -- scripts/common.sh@340 -- # ver2_l=1 00:22:20.500 22:22:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:20.500 22:22:17 -- scripts/common.sh@343 -- # case "$op" in 00:22:20.500 22:22:17 -- scripts/common.sh@344 -- # : 1 00:22:20.500 22:22:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:20.500 22:22:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:20.500 22:22:17 -- scripts/common.sh@364 -- # decimal 1 00:22:20.500 22:22:17 -- scripts/common.sh@352 -- # local d=1 00:22:20.500 22:22:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.500 22:22:17 -- scripts/common.sh@354 -- # echo 1 00:22:20.500 22:22:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:20.500 22:22:17 -- scripts/common.sh@365 -- # decimal 2 00:22:20.500 22:22:17 -- scripts/common.sh@352 -- # local d=2 00:22:20.500 22:22:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.500 22:22:17 -- scripts/common.sh@354 -- # echo 2 00:22:20.500 22:22:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:20.500 22:22:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:20.500 22:22:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:20.500 22:22:17 -- scripts/common.sh@367 -- # return 0 00:22:20.500 22:22:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.500 22:22:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:20.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.500 --rc genhtml_branch_coverage=1 00:22:20.500 --rc genhtml_function_coverage=1 00:22:20.500 --rc genhtml_legend=1 00:22:20.500 --rc geninfo_all_blocks=1 00:22:20.500 --rc geninfo_unexecuted_blocks=1 00:22:20.500 00:22:20.500 ' 00:22:20.500 22:22:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:20.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.500 --rc genhtml_branch_coverage=1 00:22:20.500 --rc genhtml_function_coverage=1 00:22:20.500 --rc genhtml_legend=1 00:22:20.500 --rc geninfo_all_blocks=1 00:22:20.500 --rc geninfo_unexecuted_blocks=1 00:22:20.500 00:22:20.500 ' 00:22:20.500 22:22:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:20.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.500 --rc genhtml_branch_coverage=1 00:22:20.500 --rc genhtml_function_coverage=1 00:22:20.500 --rc genhtml_legend=1 00:22:20.500 --rc geninfo_all_blocks=1 00:22:20.500 --rc geninfo_unexecuted_blocks=1 00:22:20.500 00:22:20.500 ' 00:22:20.500 
22:22:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:20.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.500 --rc genhtml_branch_coverage=1 00:22:20.500 --rc genhtml_function_coverage=1 00:22:20.500 --rc genhtml_legend=1 00:22:20.500 --rc geninfo_all_blocks=1 00:22:20.500 --rc geninfo_unexecuted_blocks=1 00:22:20.500 00:22:20.500 ' 00:22:20.500 22:22:17 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:20.500 22:22:17 -- nvmf/common.sh@7 -- # uname -s 00:22:20.500 22:22:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.500 22:22:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.500 22:22:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.500 22:22:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.500 22:22:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.500 22:22:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.500 22:22:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.500 22:22:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.500 22:22:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.500 22:22:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.500 22:22:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:22:20.500 22:22:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:22:20.500 22:22:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.500 22:22:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.500 22:22:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:20.500 22:22:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:20.500 22:22:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.500 22:22:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.500 22:22:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.500 22:22:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.500 22:22:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.500 22:22:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.500 22:22:17 -- paths/export.sh@5 -- # export PATH 00:22:20.500 22:22:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.500 22:22:17 -- nvmf/common.sh@46 -- # : 0 00:22:20.500 22:22:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:20.500 22:22:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:20.500 22:22:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:20.500 22:22:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.500 22:22:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.500 22:22:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:20.500 22:22:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:20.500 22:22:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:20.500 22:22:17 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:20.500 22:22:17 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:20.500 22:22:17 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:20.500 22:22:17 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:20.500 22:22:17 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:20.500 22:22:17 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:20.500 22:22:17 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:20.500 22:22:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:20.501 22:22:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.501 22:22:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:20.501 22:22:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:20.501 22:22:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:20.501 22:22:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.501 22:22:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.501 22:22:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.501 22:22:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:20.501 22:22:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:20.501 22:22:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:20.501 22:22:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:20.501 22:22:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:20.501 22:22:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:20.501 22:22:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:20.760 22:22:17 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.760 22:22:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:20.760 22:22:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:20.760 22:22:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:20.760 22:22:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:20.760 22:22:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:20.760 22:22:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.760 22:22:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:20.760 22:22:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:20.760 22:22:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:20.760 22:22:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:20.760 22:22:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:20.760 22:22:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:20.760 Cannot find device "nvmf_tgt_br" 00:22:20.760 22:22:17 -- nvmf/common.sh@154 -- # true 00:22:20.760 22:22:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:20.760 Cannot find device "nvmf_tgt_br2" 00:22:20.760 22:22:17 -- nvmf/common.sh@155 -- # true 00:22:20.760 22:22:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:20.760 22:22:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:20.760 Cannot find device "nvmf_tgt_br" 00:22:20.760 22:22:17 -- nvmf/common.sh@157 -- # true 00:22:20.760 22:22:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:20.760 Cannot find device "nvmf_tgt_br2" 00:22:20.760 22:22:17 -- nvmf/common.sh@158 -- # true 00:22:20.760 22:22:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:20.760 22:22:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:20.760 22:22:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:20.760 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:20.760 22:22:17 -- nvmf/common.sh@161 -- # true 00:22:20.760 22:22:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:20.760 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:20.760 22:22:17 -- nvmf/common.sh@162 -- # true 00:22:20.760 22:22:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:20.760 22:22:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:20.760 22:22:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:20.760 22:22:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:20.760 22:22:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:20.760 22:22:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:20.760 22:22:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:20.760 22:22:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:20.760 22:22:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:20.760 22:22:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:20.760 22:22:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:20.760 22:22:17 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:20.760 22:22:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:20.760 22:22:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:20.760 22:22:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:20.760 22:22:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:20.760 22:22:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:20.760 22:22:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:20.760 22:22:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:21.019 22:22:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:21.019 22:22:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:21.019 22:22:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:21.019 22:22:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:21.019 22:22:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:21.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:22:21.019 00:22:21.019 --- 10.0.0.2 ping statistics --- 00:22:21.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.019 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:21.019 22:22:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:21.019 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:21.019 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:22:21.019 00:22:21.019 --- 10.0.0.3 ping statistics --- 00:22:21.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.019 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:21.019 22:22:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:21.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:21.019 00:22:21.019 --- 10.0.0.1 ping statistics --- 00:22:21.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.019 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:21.019 22:22:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.019 22:22:17 -- nvmf/common.sh@421 -- # return 0 00:22:21.019 22:22:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:21.019 22:22:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.019 22:22:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:21.019 22:22:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:21.019 22:22:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.019 22:22:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:21.019 22:22:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:21.019 22:22:17 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:21.019 22:22:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:21.019 22:22:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:21.019 22:22:17 -- common/autotest_common.sh@10 -- # set +x 00:22:21.019 22:22:17 -- nvmf/common.sh@469 -- # nvmfpid=86174 00:22:21.019 22:22:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:21.019 22:22:17 -- nvmf/common.sh@470 -- # waitforlisten 86174 00:22:21.019 22:22:17 -- common/autotest_common.sh@829 -- # '[' -z 86174 ']' 00:22:21.019 22:22:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.019 22:22:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.019 22:22:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.019 22:22:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.019 22:22:17 -- common/autotest_common.sh@10 -- # set +x 00:22:21.019 [2024-11-17 22:22:17.505315] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:21.019 [2024-11-17 22:22:17.505831] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.279 [2024-11-17 22:22:17.639890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.279 [2024-11-17 22:22:17.748042] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:21.279 [2024-11-17 22:22:17.748215] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.279 [2024-11-17 22:22:17.748233] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.279 [2024-11-17 22:22:17.748246] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
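The veth/namespace plumbing traced above (nvmf_veth_init) is what lets the initiator in the root namespace reach an SPDK target confined to nvmf_tgt_ns_spdk over 10.0.0.0/24. A minimal, hand-run sketch of the same idea, assuming root and iproute2; the names ns_tgt, veth_host and veth_tgt are illustrative stand-ins, not the script's values (the script uses nvmf_tgt_ns_spdk, nvmf_init_if, nvmf_tgt_if and additionally joins several veth ends through the nvmf_br bridge):

# Target namespace plus a veth pair crossing into it
ip netns add ns_tgt
ip link add veth_host type veth peer name veth_tgt
ip link set veth_tgt netns ns_tgt
# Address both ends and bring them up; the initiator side stays in the root namespace
ip addr add 10.0.0.1/24 dev veth_host
ip netns exec ns_tgt ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_host up
ip netns exec ns_tgt ip link set veth_tgt up
ip netns exec ns_tgt ip link set lo up
# Same sanity checks as the pings in the log
ping -c 1 10.0.0.2
ip netns exec ns_tgt ping -c 1 10.0.0.1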
00:22:21.279 [2024-11-17 22:22:17.748286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.215 22:22:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.215 22:22:18 -- common/autotest_common.sh@862 -- # return 0 00:22:22.215 22:22:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:22.215 22:22:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:22.215 22:22:18 -- common/autotest_common.sh@10 -- # set +x 00:22:22.215 22:22:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.216 22:22:18 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:22.216 22:22:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.216 22:22:18 -- common/autotest_common.sh@10 -- # set +x 00:22:22.216 [2024-11-17 22:22:18.580480] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.216 [2024-11-17 22:22:18.588583] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:22.216 null0 00:22:22.216 [2024-11-17 22:22:18.620537] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.216 22:22:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.216 22:22:18 -- host/discovery_remove_ifc.sh@59 -- # hostpid=86224 00:22:22.216 22:22:18 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:22.216 22:22:18 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 86224 /tmp/host.sock 00:22:22.216 22:22:18 -- common/autotest_common.sh@829 -- # '[' -z 86224 ']' 00:22:22.216 22:22:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:22.216 22:22:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.216 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:22.216 22:22:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:22.216 22:22:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.216 22:22:18 -- common/autotest_common.sh@10 -- # set +x 00:22:22.216 [2024-11-17 22:22:18.703929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:22.216 [2024-11-17 22:22:18.704009] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86224 ] 00:22:22.475 [2024-11-17 22:22:18.840025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.475 [2024-11-17 22:22:18.941970] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:22.475 [2024-11-17 22:22:18.942157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.042 22:22:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.042 22:22:19 -- common/autotest_common.sh@862 -- # return 0 00:22:23.042 22:22:19 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:23.042 22:22:19 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:23.042 22:22:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.042 22:22:19 -- common/autotest_common.sh@10 -- # set +x 00:22:23.042 22:22:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.042 22:22:19 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:23.042 22:22:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.042 22:22:19 -- common/autotest_common.sh@10 -- # set +x 00:22:23.301 22:22:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.301 22:22:19 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:23.301 22:22:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.301 22:22:19 -- common/autotest_common.sh@10 -- # set +x 00:22:24.236 [2024-11-17 22:22:20.773408] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:24.236 [2024-11-17 22:22:20.773447] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:24.236 [2024-11-17 22:22:20.773464] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:24.494 [2024-11-17 22:22:20.859521] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:24.494 [2024-11-17 22:22:20.915316] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:24.494 [2024-11-17 22:22:20.915365] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:24.494 [2024-11-17 22:22:20.915394] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:24.494 [2024-11-17 22:22:20.915410] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:24.494 [2024-11-17 22:22:20.915430] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:24.494 22:22:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.494 22:22:20 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:24.494 22:22:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:24.494 22:22:20 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.494 22:22:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:24.494 22:22:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.494 22:22:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:24.494 22:22:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:24.494 22:22:20 -- common/autotest_common.sh@10 -- # set +x 00:22:24.495 [2024-11-17 22:22:20.921987] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x565840 was disconnected and freed. delete nvme_qpair. 00:22:24.495 22:22:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.495 22:22:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:24.495 22:22:20 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:24.495 22:22:20 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:24.495 22:22:20 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:24.495 22:22:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:24.495 22:22:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.495 22:22:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:24.495 22:22:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.495 22:22:20 -- common/autotest_common.sh@10 -- # set +x 00:22:24.495 22:22:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:24.495 22:22:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:24.495 22:22:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.495 22:22:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:24.495 22:22:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:25.871 22:22:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:25.872 22:22:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.872 22:22:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.872 22:22:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:25.872 22:22:22 -- common/autotest_common.sh@10 -- # set +x 00:22:25.872 22:22:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:25.872 22:22:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:25.872 22:22:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.872 22:22:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:25.872 22:22:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:26.807 22:22:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:26.807 22:22:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.807 22:22:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.807 22:22:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:26.807 22:22:23 -- common/autotest_common.sh@10 -- # set +x 00:22:26.807 22:22:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:26.807 22:22:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:26.807 22:22:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.807 22:22:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:26.807 22:22:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:27.744 22:22:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:27.744 22:22:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:27.744 22:22:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:27.744 22:22:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.744 22:22:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:27.744 22:22:24 -- common/autotest_common.sh@10 -- # set +x 00:22:27.744 22:22:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:27.744 22:22:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.744 22:22:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:27.744 22:22:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:28.682 22:22:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:28.682 22:22:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.682 22:22:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.682 22:22:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:28.682 22:22:25 -- common/autotest_common.sh@10 -- # set +x 00:22:28.682 22:22:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:28.682 22:22:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:28.682 22:22:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.682 22:22:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:28.682 22:22:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:30.060 22:22:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:30.060 22:22:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.060 22:22:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:30.060 22:22:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.060 22:22:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:30.060 22:22:26 -- common/autotest_common.sh@10 -- # set +x 00:22:30.060 22:22:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:30.060 22:22:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.060 [2024-11-17 22:22:26.343316] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:30.060 [2024-11-17 22:22:26.343366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.060 [2024-11-17 22:22:26.343381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.060 [2024-11-17 22:22:26.343391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.060 [2024-11-17 22:22:26.343399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.060 [2024-11-17 22:22:26.343408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.060 [2024-11-17 22:22:26.343416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.060 [2024-11-17 22:22:26.343424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.060 [2024-11-17 22:22:26.343432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.060 [2024-11-17 
22:22:26.343441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.060 [2024-11-17 22:22:26.343449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.060 [2024-11-17 22:22:26.343456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4dc9f0 is same with the state(5) to be set 00:22:30.060 22:22:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:30.060 22:22:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:30.060 [2024-11-17 22:22:26.353313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4dc9f0 (9): Bad file descriptor 00:22:30.060 [2024-11-17 22:22:26.363338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:30.996 22:22:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:30.996 22:22:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.996 22:22:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.996 22:22:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:30.996 22:22:27 -- common/autotest_common.sh@10 -- # set +x 00:22:30.996 22:22:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:30.996 22:22:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:30.996 [2024-11-17 22:22:27.398861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:31.932 [2024-11-17 22:22:28.422864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:31.932 [2024-11-17 22:22:28.422953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4dc9f0 with addr=10.0.0.2, port=4420 00:22:31.932 [2024-11-17 22:22:28.422985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4dc9f0 is same with the state(5) to be set 00:22:31.932 [2024-11-17 22:22:28.423030] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:31.932 [2024-11-17 22:22:28.423052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:31.932 [2024-11-17 22:22:28.423071] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:31.932 [2024-11-17 22:22:28.423092] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:31.932 [2024-11-17 22:22:28.423875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4dc9f0 (9): Bad file descriptor 00:22:31.932 [2024-11-17 22:22:28.423946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
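The errors above are the intended effect of the test pulling nvmf_tgt_if down (the ip addr del and link down a few entries earlier): reads on the TCP connection to 10.0.0.2:4420 fail with errno 110, reconnect attempts fail the same way, the controller ends up in a failed state, and the discovery poller drops its entry for the subsystem (seen just below), so the nvme0n1 bdev should disappear from the host application. The harness detects this by polling bdev_get_bdevs over /tmp/host.sock until the list is empty; a rough stand-alone equivalent, assuming SPDK's scripts/rpc.py and jq are on PATH (the harness itself goes through its rpc_cmd wrapper, not rpc.py directly):

# Wait until the host app reports no bdevs, i.e. nvme0n1 has been torn down
while [ -n "$(rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]; do
    sleep 1
done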
00:22:31.932 [2024-11-17 22:22:28.423999] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:31.932 [2024-11-17 22:22:28.424065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.932 [2024-11-17 22:22:28.424095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.932 [2024-11-17 22:22:28.424120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.932 [2024-11-17 22:22:28.424141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.932 [2024-11-17 22:22:28.424163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.932 [2024-11-17 22:22:28.424183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.932 [2024-11-17 22:22:28.424204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.932 [2024-11-17 22:22:28.424224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.932 [2024-11-17 22:22:28.424245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.932 [2024-11-17 22:22:28.424265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.932 [2024-11-17 22:22:28.424285] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:31.932 [2024-11-17 22:22:28.424363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4dce00 (9): Bad file descriptor 00:22:31.932 [2024-11-17 22:22:28.425344] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:31.932 [2024-11-17 22:22:28.425376] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:31.932 22:22:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.932 22:22:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:31.932 22:22:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:32.868 22:22:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.868 22:22:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.868 22:22:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.868 22:22:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.868 22:22:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.868 22:22:29 -- common/autotest_common.sh@10 -- # set +x 00:22:32.868 22:22:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.868 22:22:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.126 22:22:29 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:33.126 22:22:29 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:33.126 22:22:29 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:33.126 22:22:29 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:33.126 22:22:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:33.126 22:22:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.126 22:22:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.126 22:22:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:33.126 22:22:29 -- common/autotest_common.sh@10 -- # set +x 00:22:33.126 22:22:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:33.126 22:22:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:33.126 22:22:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.126 22:22:29 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:33.126 22:22:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:34.061 [2024-11-17 22:22:30.434864] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:34.061 [2024-11-17 22:22:30.434892] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:34.061 [2024-11-17 22:22:30.434910] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:34.061 [2024-11-17 22:22:30.520974] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:34.061 [2024-11-17 22:22:30.576148] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:34.061 [2024-11-17 22:22:30.576223] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:34.061 [2024-11-17 22:22:30.576245] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:34.061 [2024-11-17 22:22:30.576260] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:34.061 [2024-11-17 22:22:30.576268] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:34.061 22:22:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:34.061 22:22:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.061 22:22:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:34.061 22:22:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:34.061 22:22:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.061 22:22:30 -- common/autotest_common.sh@10 -- # set +x 00:22:34.061 22:22:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:34.061 [2024-11-17 22:22:30.583488] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x520080 was disconnected and freed. delete nvme_qpair. 00:22:34.061 22:22:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.061 22:22:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:34.061 22:22:30 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:34.061 22:22:30 -- host/discovery_remove_ifc.sh@90 -- # killprocess 86224 00:22:34.061 22:22:30 -- common/autotest_common.sh@936 -- # '[' -z 86224 ']' 00:22:34.061 22:22:30 -- common/autotest_common.sh@940 -- # kill -0 86224 00:22:34.061 22:22:30 -- common/autotest_common.sh@941 -- # uname 00:22:34.061 22:22:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:34.061 22:22:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86224 00:22:34.061 killing process with pid 86224 00:22:34.061 22:22:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:34.061 22:22:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:34.061 22:22:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86224' 00:22:34.061 22:22:30 -- common/autotest_common.sh@955 -- # kill 86224 00:22:34.061 22:22:30 -- common/autotest_common.sh@960 -- # wait 86224 00:22:34.320 22:22:30 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:34.320 22:22:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:34.320 22:22:30 -- nvmf/common.sh@116 -- # sync 00:22:34.578 22:22:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:34.578 22:22:30 -- nvmf/common.sh@119 -- # set +e 00:22:34.578 22:22:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:34.578 22:22:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:34.578 rmmod nvme_tcp 00:22:34.578 rmmod nvme_fabrics 00:22:34.578 rmmod nvme_keyring 00:22:34.578 22:22:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:34.578 22:22:30 -- nvmf/common.sh@123 -- # set -e 00:22:34.578 22:22:30 -- nvmf/common.sh@124 -- # return 0 00:22:34.578 22:22:30 -- nvmf/common.sh@477 -- # '[' -n 86174 ']' 00:22:34.578 22:22:30 -- nvmf/common.sh@478 -- # killprocess 86174 00:22:34.578 22:22:30 -- common/autotest_common.sh@936 -- # '[' -z 86174 ']' 00:22:34.578 22:22:30 -- common/autotest_common.sh@940 -- # kill -0 86174 00:22:34.578 22:22:30 -- common/autotest_common.sh@941 -- # uname 00:22:34.578 22:22:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:34.578 22:22:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86174 00:22:34.578 killing process with pid 86174 00:22:34.578 22:22:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:34.578 22:22:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:22:34.578 22:22:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86174' 00:22:34.578 22:22:31 -- common/autotest_common.sh@955 -- # kill 86174 00:22:34.578 22:22:31 -- common/autotest_common.sh@960 -- # wait 86174 00:22:34.837 22:22:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:34.837 22:22:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:34.837 22:22:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:34.837 22:22:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.837 22:22:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:34.837 22:22:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.837 22:22:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.837 22:22:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.837 22:22:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:34.837 00:22:34.837 real 0m14.350s 00:22:34.837 user 0m24.585s 00:22:34.837 sys 0m1.560s 00:22:34.837 ************************************ 00:22:34.837 END TEST nvmf_discovery_remove_ifc 00:22:34.837 ************************************ 00:22:34.837 22:22:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:34.837 22:22:31 -- common/autotest_common.sh@10 -- # set +x 00:22:34.837 22:22:31 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:34.837 22:22:31 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:34.837 22:22:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:34.837 22:22:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:34.837 22:22:31 -- common/autotest_common.sh@10 -- # set +x 00:22:34.837 ************************************ 00:22:34.837 START TEST nvmf_digest 00:22:34.837 ************************************ 00:22:34.837 22:22:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:34.837 * Looking for test storage... 00:22:34.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:34.837 22:22:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:34.837 22:22:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:34.837 22:22:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:35.096 22:22:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:35.096 22:22:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:35.096 22:22:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:35.096 22:22:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:35.096 22:22:31 -- scripts/common.sh@335 -- # IFS=.-: 00:22:35.096 22:22:31 -- scripts/common.sh@335 -- # read -ra ver1 00:22:35.097 22:22:31 -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.097 22:22:31 -- scripts/common.sh@336 -- # read -ra ver2 00:22:35.097 22:22:31 -- scripts/common.sh@337 -- # local 'op=<' 00:22:35.097 22:22:31 -- scripts/common.sh@339 -- # ver1_l=2 00:22:35.097 22:22:31 -- scripts/common.sh@340 -- # ver2_l=1 00:22:35.097 22:22:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:35.097 22:22:31 -- scripts/common.sh@343 -- # case "$op" in 00:22:35.097 22:22:31 -- scripts/common.sh@344 -- # : 1 00:22:35.097 22:22:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:35.097 22:22:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.097 22:22:31 -- scripts/common.sh@364 -- # decimal 1 00:22:35.097 22:22:31 -- scripts/common.sh@352 -- # local d=1 00:22:35.097 22:22:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.097 22:22:31 -- scripts/common.sh@354 -- # echo 1 00:22:35.097 22:22:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:35.097 22:22:31 -- scripts/common.sh@365 -- # decimal 2 00:22:35.097 22:22:31 -- scripts/common.sh@352 -- # local d=2 00:22:35.097 22:22:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.097 22:22:31 -- scripts/common.sh@354 -- # echo 2 00:22:35.097 22:22:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:35.097 22:22:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:35.097 22:22:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:35.097 22:22:31 -- scripts/common.sh@367 -- # return 0 00:22:35.097 22:22:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.097 22:22:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:35.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.097 --rc genhtml_branch_coverage=1 00:22:35.097 --rc genhtml_function_coverage=1 00:22:35.097 --rc genhtml_legend=1 00:22:35.097 --rc geninfo_all_blocks=1 00:22:35.097 --rc geninfo_unexecuted_blocks=1 00:22:35.097 00:22:35.097 ' 00:22:35.097 22:22:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:35.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.097 --rc genhtml_branch_coverage=1 00:22:35.097 --rc genhtml_function_coverage=1 00:22:35.097 --rc genhtml_legend=1 00:22:35.097 --rc geninfo_all_blocks=1 00:22:35.097 --rc geninfo_unexecuted_blocks=1 00:22:35.097 00:22:35.097 ' 00:22:35.097 22:22:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:35.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.097 --rc genhtml_branch_coverage=1 00:22:35.097 --rc genhtml_function_coverage=1 00:22:35.097 --rc genhtml_legend=1 00:22:35.097 --rc geninfo_all_blocks=1 00:22:35.097 --rc geninfo_unexecuted_blocks=1 00:22:35.097 00:22:35.097 ' 00:22:35.097 22:22:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:35.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.097 --rc genhtml_branch_coverage=1 00:22:35.097 --rc genhtml_function_coverage=1 00:22:35.097 --rc genhtml_legend=1 00:22:35.097 --rc geninfo_all_blocks=1 00:22:35.097 --rc geninfo_unexecuted_blocks=1 00:22:35.097 00:22:35.097 ' 00:22:35.097 22:22:31 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:35.097 22:22:31 -- nvmf/common.sh@7 -- # uname -s 00:22:35.097 22:22:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.097 22:22:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.097 22:22:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.097 22:22:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.097 22:22:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.097 22:22:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.097 22:22:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.097 22:22:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.097 22:22:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.097 22:22:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.097 22:22:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:22:35.097 
22:22:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:22:35.097 22:22:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.097 22:22:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.097 22:22:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:35.097 22:22:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:35.097 22:22:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.097 22:22:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.097 22:22:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.097 22:22:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.097 22:22:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.097 22:22:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.097 22:22:31 -- paths/export.sh@5 -- # export PATH 00:22:35.097 22:22:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.097 22:22:31 -- nvmf/common.sh@46 -- # : 0 00:22:35.097 22:22:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:35.097 22:22:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:35.097 22:22:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:35.097 22:22:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.097 22:22:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.097 22:22:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:22:35.097 22:22:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:35.097 22:22:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:35.097 22:22:31 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:35.097 22:22:31 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:35.097 22:22:31 -- host/digest.sh@16 -- # runtime=2 00:22:35.097 22:22:31 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:35.097 22:22:31 -- host/digest.sh@132 -- # nvmftestinit 00:22:35.097 22:22:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:35.097 22:22:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.097 22:22:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:35.097 22:22:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:35.097 22:22:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:35.097 22:22:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.097 22:22:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.097 22:22:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.097 22:22:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:35.097 22:22:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:35.097 22:22:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:35.097 22:22:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:35.097 22:22:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:35.097 22:22:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:35.097 22:22:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.097 22:22:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.097 22:22:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:35.097 22:22:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:35.097 22:22:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:35.097 22:22:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:35.097 22:22:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:35.097 22:22:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.097 22:22:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:35.097 22:22:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:35.097 22:22:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:35.097 22:22:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:35.097 22:22:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:35.097 22:22:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:35.097 Cannot find device "nvmf_tgt_br" 00:22:35.097 22:22:31 -- nvmf/common.sh@154 -- # true 00:22:35.097 22:22:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:35.097 Cannot find device "nvmf_tgt_br2" 00:22:35.098 22:22:31 -- nvmf/common.sh@155 -- # true 00:22:35.098 22:22:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:35.098 22:22:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:35.098 Cannot find device "nvmf_tgt_br" 00:22:35.098 22:22:31 -- nvmf/common.sh@157 -- # true 00:22:35.098 22:22:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:35.098 Cannot find device "nvmf_tgt_br2" 00:22:35.098 22:22:31 -- nvmf/common.sh@158 -- # true 00:22:35.098 22:22:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:35.098 22:22:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:35.098 
22:22:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:35.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.098 22:22:31 -- nvmf/common.sh@161 -- # true 00:22:35.098 22:22:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:35.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.098 22:22:31 -- nvmf/common.sh@162 -- # true 00:22:35.098 22:22:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:35.098 22:22:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:35.098 22:22:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:35.098 22:22:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:35.359 22:22:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:35.359 22:22:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:35.359 22:22:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:35.359 22:22:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:35.359 22:22:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:35.359 22:22:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:35.359 22:22:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:35.359 22:22:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:35.359 22:22:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:35.359 22:22:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:35.359 22:22:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:35.359 22:22:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:35.359 22:22:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:35.359 22:22:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:35.359 22:22:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:35.359 22:22:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:35.359 22:22:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:35.359 22:22:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:35.359 22:22:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:35.359 22:22:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:35.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:22:35.359 00:22:35.359 --- 10.0.0.2 ping statistics --- 00:22:35.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.359 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:35.359 22:22:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:35.359 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:35.359 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:22:35.359 00:22:35.359 --- 10.0.0.3 ping statistics --- 00:22:35.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.359 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:35.359 22:22:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:35.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:35.359 00:22:35.359 --- 10.0.0.1 ping statistics --- 00:22:35.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.359 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:35.359 22:22:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.359 22:22:31 -- nvmf/common.sh@421 -- # return 0 00:22:35.359 22:22:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:35.359 22:22:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.359 22:22:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:35.359 22:22:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:35.359 22:22:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.359 22:22:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:35.359 22:22:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:35.359 22:22:31 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:35.359 22:22:31 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:35.359 22:22:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:35.359 22:22:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:35.359 22:22:31 -- common/autotest_common.sh@10 -- # set +x 00:22:35.359 ************************************ 00:22:35.359 START TEST nvmf_digest_clean 00:22:35.359 ************************************ 00:22:35.359 22:22:31 -- common/autotest_common.sh@1114 -- # run_digest 00:22:35.359 22:22:31 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:35.359 22:22:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:35.359 22:22:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:35.359 22:22:31 -- common/autotest_common.sh@10 -- # set +x 00:22:35.359 22:22:31 -- nvmf/common.sh@469 -- # nvmfpid=86645 00:22:35.359 22:22:31 -- nvmf/common.sh@470 -- # waitforlisten 86645 00:22:35.359 22:22:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:35.359 22:22:31 -- common/autotest_common.sh@829 -- # '[' -z 86645 ']' 00:22:35.359 22:22:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.359 22:22:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.359 22:22:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.359 22:22:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.359 22:22:31 -- common/autotest_common.sh@10 -- # set +x 00:22:35.635 [2024-11-17 22:22:31.996258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:35.635 [2024-11-17 22:22:31.997345] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.635 [2024-11-17 22:22:32.152629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.921 [2024-11-17 22:22:32.259810] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:35.921 [2024-11-17 22:22:32.260280] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.921 [2024-11-17 22:22:32.260309] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.921 [2024-11-17 22:22:32.260321] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.921 [2024-11-17 22:22:32.260361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.499 22:22:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.499 22:22:33 -- common/autotest_common.sh@862 -- # return 0 00:22:36.499 22:22:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:36.499 22:22:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:36.499 22:22:33 -- common/autotest_common.sh@10 -- # set +x 00:22:36.499 22:22:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.499 22:22:33 -- host/digest.sh@120 -- # common_target_config 00:22:36.499 22:22:33 -- host/digest.sh@43 -- # rpc_cmd 00:22:36.499 22:22:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.499 22:22:33 -- common/autotest_common.sh@10 -- # set +x 00:22:36.758 null0 00:22:36.758 [2024-11-17 22:22:33.186421] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.758 [2024-11-17 22:22:33.210555] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.758 22:22:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.758 22:22:33 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:36.758 22:22:33 -- host/digest.sh@77 -- # local rw bs qd 00:22:36.758 22:22:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:36.758 22:22:33 -- host/digest.sh@80 -- # rw=randread 00:22:36.758 22:22:33 -- host/digest.sh@80 -- # bs=4096 00:22:36.758 22:22:33 -- host/digest.sh@80 -- # qd=128 00:22:36.758 22:22:33 -- host/digest.sh@82 -- # bperfpid=86701 00:22:36.758 22:22:33 -- host/digest.sh@83 -- # waitforlisten 86701 /var/tmp/bperf.sock 00:22:36.758 22:22:33 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:36.758 22:22:33 -- common/autotest_common.sh@829 -- # '[' -z 86701 ']' 00:22:36.758 22:22:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:36.758 22:22:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.758 22:22:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:36.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:22:36.758 22:22:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.758 22:22:33 -- common/autotest_common.sh@10 -- # set +x 00:22:36.758 [2024-11-17 22:22:33.273229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:36.758 [2024-11-17 22:22:33.273653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86701 ] 00:22:37.017 [2024-11-17 22:22:33.416706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.017 [2024-11-17 22:22:33.516047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.954 22:22:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.954 22:22:34 -- common/autotest_common.sh@862 -- # return 0 00:22:37.954 22:22:34 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:37.954 22:22:34 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:37.954 22:22:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:38.212 22:22:34 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:38.212 22:22:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:38.472 nvme0n1 00:22:38.472 22:22:34 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:38.472 22:22:34 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:38.472 Running I/O for 2 seconds... 
00:22:40.376 00:22:40.376 Latency(us) 00:22:40.376 [2024-11-17T22:22:36.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.376 [2024-11-17T22:22:36.991Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:40.376 nvme0n1 : 2.00 23909.03 93.39 0.00 0.00 5349.18 2368.23 12034.79 00:22:40.376 [2024-11-17T22:22:36.991Z] =================================================================================================================== 00:22:40.376 [2024-11-17T22:22:36.991Z] Total : 23909.03 93.39 0.00 0.00 5349.18 2368.23 12034.79 00:22:40.376 0 00:22:40.376 22:22:36 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:40.376 22:22:36 -- host/digest.sh@92 -- # get_accel_stats 00:22:40.376 22:22:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:40.376 22:22:36 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:40.376 | select(.opcode=="crc32c") 00:22:40.376 | "\(.module_name) \(.executed)"' 00:22:40.376 22:22:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:40.635 22:22:37 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:40.635 22:22:37 -- host/digest.sh@93 -- # exp_module=software 00:22:40.635 22:22:37 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:40.635 22:22:37 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:40.635 22:22:37 -- host/digest.sh@97 -- # killprocess 86701 00:22:40.635 22:22:37 -- common/autotest_common.sh@936 -- # '[' -z 86701 ']' 00:22:40.635 22:22:37 -- common/autotest_common.sh@940 -- # kill -0 86701 00:22:40.635 22:22:37 -- common/autotest_common.sh@941 -- # uname 00:22:40.635 22:22:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:40.635 22:22:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86701 00:22:40.635 22:22:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:40.635 22:22:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:40.635 killing process with pid 86701 00:22:40.635 22:22:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86701' 00:22:40.635 Received shutdown signal, test time was about 2.000000 seconds 00:22:40.635 00:22:40.635 Latency(us) 00:22:40.635 [2024-11-17T22:22:37.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.635 [2024-11-17T22:22:37.250Z] =================================================================================================================== 00:22:40.635 [2024-11-17T22:22:37.250Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.635 22:22:37 -- common/autotest_common.sh@955 -- # kill 86701 00:22:40.635 22:22:37 -- common/autotest_common.sh@960 -- # wait 86701 00:22:41.203 22:22:37 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:41.203 22:22:37 -- host/digest.sh@77 -- # local rw bs qd 00:22:41.203 22:22:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:41.203 22:22:37 -- host/digest.sh@80 -- # rw=randread 00:22:41.203 22:22:37 -- host/digest.sh@80 -- # bs=131072 00:22:41.203 22:22:37 -- host/digest.sh@80 -- # qd=16 00:22:41.203 22:22:37 -- host/digest.sh@82 -- # bperfpid=86790 00:22:41.203 22:22:37 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:41.203 22:22:37 -- host/digest.sh@83 -- # waitforlisten 86790 /var/tmp/bperf.sock 00:22:41.203 22:22:37 -- 
common/autotest_common.sh@829 -- # '[' -z 86790 ']' 00:22:41.203 22:22:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:41.203 22:22:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:41.203 22:22:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:41.203 22:22:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.203 22:22:37 -- common/autotest_common.sh@10 -- # set +x 00:22:41.203 [2024-11-17 22:22:37.593695] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:41.203 [2024-11-17 22:22:37.593811] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86790 ] 00:22:41.203 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:41.203 Zero copy mechanism will not be used. 00:22:41.203 [2024-11-17 22:22:37.724788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.203 [2024-11-17 22:22:37.811415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.140 22:22:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.140 22:22:38 -- common/autotest_common.sh@862 -- # return 0 00:22:42.140 22:22:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:42.140 22:22:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:42.140 22:22:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:42.400 22:22:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:42.400 22:22:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:42.671 nvme0n1 00:22:42.671 22:22:39 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:42.671 22:22:39 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:42.671 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:42.671 Zero copy mechanism will not be used. 00:22:42.671 Running I/O for 2 seconds... 
00:22:45.208 00:22:45.208 Latency(us) 00:22:45.208 [2024-11-17T22:22:41.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.208 [2024-11-17T22:22:41.823Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:45.208 nvme0n1 : 2.00 9154.13 1144.27 0.00 0.00 1745.14 565.99 11141.12 00:22:45.208 [2024-11-17T22:22:41.823Z] =================================================================================================================== 00:22:45.208 [2024-11-17T22:22:41.823Z] Total : 9154.13 1144.27 0.00 0.00 1745.14 565.99 11141.12 00:22:45.208 0 00:22:45.208 22:22:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:45.208 22:22:41 -- host/digest.sh@92 -- # get_accel_stats 00:22:45.208 22:22:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:45.208 22:22:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:45.208 | select(.opcode=="crc32c") 00:22:45.208 | "\(.module_name) \(.executed)"' 00:22:45.208 22:22:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:45.208 22:22:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:45.208 22:22:41 -- host/digest.sh@93 -- # exp_module=software 00:22:45.208 22:22:41 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:45.208 22:22:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:45.208 22:22:41 -- host/digest.sh@97 -- # killprocess 86790 00:22:45.208 22:22:41 -- common/autotest_common.sh@936 -- # '[' -z 86790 ']' 00:22:45.208 22:22:41 -- common/autotest_common.sh@940 -- # kill -0 86790 00:22:45.208 22:22:41 -- common/autotest_common.sh@941 -- # uname 00:22:45.208 22:22:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:45.208 22:22:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86790 00:22:45.208 22:22:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:45.208 22:22:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:45.208 22:22:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86790' 00:22:45.208 killing process with pid 86790 00:22:45.208 22:22:41 -- common/autotest_common.sh@955 -- # kill 86790 00:22:45.208 Received shutdown signal, test time was about 2.000000 seconds 00:22:45.208 00:22:45.208 Latency(us) 00:22:45.208 [2024-11-17T22:22:41.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.208 [2024-11-17T22:22:41.823Z] =================================================================================================================== 00:22:45.208 [2024-11-17T22:22:41.823Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:45.208 22:22:41 -- common/autotest_common.sh@960 -- # wait 86790 00:22:45.468 22:22:41 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:45.468 22:22:41 -- host/digest.sh@77 -- # local rw bs qd 00:22:45.468 22:22:41 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:45.468 22:22:41 -- host/digest.sh@80 -- # rw=randwrite 00:22:45.468 22:22:41 -- host/digest.sh@80 -- # bs=4096 00:22:45.468 22:22:41 -- host/digest.sh@80 -- # qd=128 00:22:45.468 22:22:41 -- host/digest.sh@82 -- # bperfpid=86876 00:22:45.468 22:22:41 -- host/digest.sh@83 -- # waitforlisten 86876 /var/tmp/bperf.sock 00:22:45.468 22:22:41 -- common/autotest_common.sh@829 -- # '[' -z 86876 ']' 00:22:45.468 22:22:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:45.468 22:22:41 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:45.468 22:22:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:45.468 22:22:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.468 22:22:41 -- common/autotest_common.sh@10 -- # set +x 00:22:45.468 22:22:41 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:45.468 [2024-11-17 22:22:41.925614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:45.468 [2024-11-17 22:22:41.925717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86876 ] 00:22:45.468 [2024-11-17 22:22:42.063344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.728 [2024-11-17 22:22:42.144515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.296 22:22:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.296 22:22:42 -- common/autotest_common.sh@862 -- # return 0 00:22:46.296 22:22:42 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:46.296 22:22:42 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:46.296 22:22:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:46.863 22:22:43 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:46.863 22:22:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:47.122 nvme0n1 00:22:47.122 22:22:43 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:47.122 22:22:43 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:47.122 Running I/O for 2 seconds... 
00:22:49.715 00:22:49.715 Latency(us) 00:22:49.715 [2024-11-17T22:22:46.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.715 [2024-11-17T22:22:46.330Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:49.715 nvme0n1 : 2.00 28904.93 112.91 0.00 0.00 4424.28 1846.92 9234.62 00:22:49.715 [2024-11-17T22:22:46.330Z] =================================================================================================================== 00:22:49.715 [2024-11-17T22:22:46.330Z] Total : 28904.93 112.91 0.00 0.00 4424.28 1846.92 9234.62 00:22:49.715 0 00:22:49.715 22:22:45 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:49.715 22:22:45 -- host/digest.sh@92 -- # get_accel_stats 00:22:49.715 22:22:45 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:49.715 22:22:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:49.715 22:22:45 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:49.715 | select(.opcode=="crc32c") 00:22:49.715 | "\(.module_name) \(.executed)"' 00:22:49.715 22:22:45 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:49.715 22:22:45 -- host/digest.sh@93 -- # exp_module=software 00:22:49.715 22:22:45 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:49.715 22:22:45 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:49.715 22:22:45 -- host/digest.sh@97 -- # killprocess 86876 00:22:49.715 22:22:45 -- common/autotest_common.sh@936 -- # '[' -z 86876 ']' 00:22:49.715 22:22:45 -- common/autotest_common.sh@940 -- # kill -0 86876 00:22:49.715 22:22:45 -- common/autotest_common.sh@941 -- # uname 00:22:49.715 22:22:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:49.715 22:22:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86876 00:22:49.715 22:22:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:49.715 22:22:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:49.715 killing process with pid 86876 00:22:49.715 22:22:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86876' 00:22:49.715 Received shutdown signal, test time was about 2.000000 seconds 00:22:49.715 00:22:49.715 Latency(us) 00:22:49.715 [2024-11-17T22:22:46.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.715 [2024-11-17T22:22:46.330Z] =================================================================================================================== 00:22:49.715 [2024-11-17T22:22:46.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:49.715 22:22:45 -- common/autotest_common.sh@955 -- # kill 86876 00:22:49.715 22:22:45 -- common/autotest_common.sh@960 -- # wait 86876 00:22:49.715 22:22:46 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:49.715 22:22:46 -- host/digest.sh@77 -- # local rw bs qd 00:22:49.715 22:22:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:49.715 22:22:46 -- host/digest.sh@80 -- # rw=randwrite 00:22:49.715 22:22:46 -- host/digest.sh@80 -- # bs=131072 00:22:49.715 22:22:46 -- host/digest.sh@80 -- # qd=16 00:22:49.715 22:22:46 -- host/digest.sh@82 -- # bperfpid=86967 00:22:49.715 22:22:46 -- host/digest.sh@83 -- # waitforlisten 86967 /var/tmp/bperf.sock 00:22:49.715 22:22:46 -- common/autotest_common.sh@829 -- # '[' -z 86967 ']' 00:22:49.715 22:22:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:49.715 22:22:46 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:22:49.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:49.715 22:22:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:49.715 22:22:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:49.715 22:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:49.715 22:22:46 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:49.715 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:49.715 Zero copy mechanism will not be used. 00:22:49.715 [2024-11-17 22:22:46.300880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:49.715 [2024-11-17 22:22:46.300990] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86967 ] 00:22:49.973 [2024-11-17 22:22:46.439320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.973 [2024-11-17 22:22:46.517946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.906 22:22:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.906 22:22:47 -- common/autotest_common.sh@862 -- # return 0 00:22:50.906 22:22:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:50.906 22:22:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:50.906 22:22:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:51.166 22:22:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:51.166 22:22:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:51.425 nvme0n1 00:22:51.425 22:22:47 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:51.425 22:22:47 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:51.425 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:51.425 Zero copy mechanism will not be used. 00:22:51.425 Running I/O for 2 seconds... 
00:22:53.956 00:22:53.956 Latency(us) 00:22:53.956 [2024-11-17T22:22:50.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.956 [2024-11-17T22:22:50.571Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:53.956 nvme0n1 : 2.00 7983.28 997.91 0.00 0.00 1999.80 1683.08 4408.79 00:22:53.956 [2024-11-17T22:22:50.571Z] =================================================================================================================== 00:22:53.956 [2024-11-17T22:22:50.571Z] Total : 7983.28 997.91 0.00 0.00 1999.80 1683.08 4408.79 00:22:53.956 0 00:22:53.956 22:22:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:53.956 22:22:50 -- host/digest.sh@92 -- # get_accel_stats 00:22:53.956 22:22:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:53.956 22:22:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:53.956 22:22:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:53.956 | select(.opcode=="crc32c") 00:22:53.956 | "\(.module_name) \(.executed)"' 00:22:53.956 22:22:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:53.956 22:22:50 -- host/digest.sh@93 -- # exp_module=software 00:22:53.956 22:22:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:53.957 22:22:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:53.957 22:22:50 -- host/digest.sh@97 -- # killprocess 86967 00:22:53.957 22:22:50 -- common/autotest_common.sh@936 -- # '[' -z 86967 ']' 00:22:53.957 22:22:50 -- common/autotest_common.sh@940 -- # kill -0 86967 00:22:53.957 22:22:50 -- common/autotest_common.sh@941 -- # uname 00:22:53.957 22:22:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:53.957 22:22:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86967 00:22:53.957 killing process with pid 86967 00:22:53.957 Received shutdown signal, test time was about 2.000000 seconds 00:22:53.957 00:22:53.957 Latency(us) 00:22:53.957 [2024-11-17T22:22:50.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.957 [2024-11-17T22:22:50.572Z] =================================================================================================================== 00:22:53.957 [2024-11-17T22:22:50.572Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:53.957 22:22:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:53.957 22:22:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:53.957 22:22:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86967' 00:22:53.957 22:22:50 -- common/autotest_common.sh@955 -- # kill 86967 00:22:53.957 22:22:50 -- common/autotest_common.sh@960 -- # wait 86967 00:22:54.214 22:22:50 -- host/digest.sh@126 -- # killprocess 86645 00:22:54.214 22:22:50 -- common/autotest_common.sh@936 -- # '[' -z 86645 ']' 00:22:54.214 22:22:50 -- common/autotest_common.sh@940 -- # kill -0 86645 00:22:54.214 22:22:50 -- common/autotest_common.sh@941 -- # uname 00:22:54.214 22:22:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:54.214 22:22:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86645 00:22:54.214 killing process with pid 86645 00:22:54.214 22:22:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:54.214 22:22:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:54.214 22:22:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86645' 00:22:54.214 
22:22:50 -- common/autotest_common.sh@955 -- # kill 86645 00:22:54.214 22:22:50 -- common/autotest_common.sh@960 -- # wait 86645 00:22:54.473 ************************************ 00:22:54.473 END TEST nvmf_digest_clean 00:22:54.473 ************************************ 00:22:54.473 00:22:54.473 real 0m18.951s 00:22:54.473 user 0m34.640s 00:22:54.473 sys 0m5.559s 00:22:54.473 22:22:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:54.473 22:22:50 -- common/autotest_common.sh@10 -- # set +x 00:22:54.473 22:22:50 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:54.473 22:22:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:54.473 22:22:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:54.473 22:22:50 -- common/autotest_common.sh@10 -- # set +x 00:22:54.473 ************************************ 00:22:54.473 START TEST nvmf_digest_error 00:22:54.473 ************************************ 00:22:54.473 22:22:50 -- common/autotest_common.sh@1114 -- # run_digest_error 00:22:54.473 22:22:50 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:54.473 22:22:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:54.473 22:22:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:54.473 22:22:50 -- common/autotest_common.sh@10 -- # set +x 00:22:54.473 22:22:50 -- nvmf/common.sh@469 -- # nvmfpid=87086 00:22:54.473 22:22:50 -- nvmf/common.sh@470 -- # waitforlisten 87086 00:22:54.473 22:22:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:54.473 22:22:50 -- common/autotest_common.sh@829 -- # '[' -z 87086 ']' 00:22:54.473 22:22:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.473 22:22:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.473 22:22:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.473 22:22:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.473 22:22:50 -- common/autotest_common.sh@10 -- # set +x 00:22:54.473 [2024-11-17 22:22:50.973911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:54.473 [2024-11-17 22:22:50.974016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.732 [2024-11-17 22:22:51.113480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.732 [2024-11-17 22:22:51.188841] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:54.732 [2024-11-17 22:22:51.188974] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.732 [2024-11-17 22:22:51.188986] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.732 [2024-11-17 22:22:51.188995] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:54.732 [2024-11-17 22:22:51.189026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.666 22:22:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.666 22:22:51 -- common/autotest_common.sh@862 -- # return 0 00:22:55.666 22:22:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:55.666 22:22:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.666 22:22:51 -- common/autotest_common.sh@10 -- # set +x 00:22:55.666 22:22:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.666 22:22:51 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:55.666 22:22:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.666 22:22:51 -- common/autotest_common.sh@10 -- # set +x 00:22:55.666 [2024-11-17 22:22:51.965501] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:55.666 22:22:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.666 22:22:51 -- host/digest.sh@104 -- # common_target_config 00:22:55.666 22:22:51 -- host/digest.sh@43 -- # rpc_cmd 00:22:55.666 22:22:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.666 22:22:51 -- common/autotest_common.sh@10 -- # set +x 00:22:55.666 null0 00:22:55.666 [2024-11-17 22:22:52.070309] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.666 [2024-11-17 22:22:52.094454] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.666 22:22:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.666 22:22:52 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:55.666 22:22:52 -- host/digest.sh@54 -- # local rw bs qd 00:22:55.666 22:22:52 -- host/digest.sh@56 -- # rw=randread 00:22:55.666 22:22:52 -- host/digest.sh@56 -- # bs=4096 00:22:55.666 22:22:52 -- host/digest.sh@56 -- # qd=128 00:22:55.666 22:22:52 -- host/digest.sh@58 -- # bperfpid=87130 00:22:55.666 22:22:52 -- host/digest.sh@60 -- # waitforlisten 87130 /var/tmp/bperf.sock 00:22:55.666 22:22:52 -- common/autotest_common.sh@829 -- # '[' -z 87130 ']' 00:22:55.666 22:22:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:55.666 22:22:52 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:55.666 22:22:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:55.667 22:22:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:55.667 22:22:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.667 22:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:55.667 [2024-11-17 22:22:52.146209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:55.667 [2024-11-17 22:22:52.146274] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87130 ] 00:22:55.926 [2024-11-17 22:22:52.281960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.926 [2024-11-17 22:22:52.388958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.493 22:22:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.493 22:22:53 -- common/autotest_common.sh@862 -- # return 0 00:22:56.493 22:22:53 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:56.493 22:22:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:56.752 22:22:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:56.752 22:22:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.752 22:22:53 -- common/autotest_common.sh@10 -- # set +x 00:22:56.752 22:22:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.752 22:22:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:56.752 22:22:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:57.010 nvme0n1 00:22:57.010 22:22:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:57.010 22:22:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.010 22:22:53 -- common/autotest_common.sh@10 -- # set +x 00:22:57.010 22:22:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.010 22:22:53 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:57.010 22:22:53 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:57.270 Running I/O for 2 seconds... 
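The xtrace above boils down to a short RPC sequence: assign crc32c to the error-injection accel module on the target, start bdevperf idle against its own RPC socket, attach the controller over TCP with data digest enabled, arm the corruption, and run I/O. Below is a minimal bash sketch reconstructed only from the commands visible in this log, with the paths and sockets of this run; sending the accel_* calls to the target's default /var/tmp/spdk.sock and the framework_start_init step are assumptions, not something the trace shows verbatim.

```bash
#!/usr/bin/env bash
# Minimal sketch of the digest-error flow traced above (paths/sockets as in this run).
SPDK=/home/vagrant/spdk_repo/spdk

# Target side: route crc32c through the error-injection accel module before init.
# Assumption: these calls go to nvmf_tgt's default /var/tmp/spdk.sock.
$SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
$SPDK/scripts/rpc.py framework_start_init   # assumed equivalent of the bare rpc_cmd config step

# Initiator side: start bdevperf idle (-z) on its own RPC socket, as in the trace.
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &

# Configure the initiator and attach the controller with TCP data digest (--ddgst) enabled.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 256 crc32c operations on the target, then drive I/O for 2 seconds;
# each corrupted digest surfaces below as a "data digest error" and a completion
# with status COMMAND TRANSIENT TRANSPORT ERROR (00/22).
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
```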
00:22:57.270 [2024-11-17 22:22:53.704770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.704820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.704835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.714450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.714481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.714493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.724306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.724337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.724349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.733325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.733356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.733367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.745399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.745447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.745459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.757743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.757789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.757800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.769722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.769778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.769790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.780570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.780617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.780643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.792150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.792180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.792192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.804166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.804197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.804208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.816141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.816172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.816183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.824545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.824577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.824587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.836459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.836490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.836502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.847530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.847561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.847572] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.859873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.859905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.859916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.869300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.869330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.869341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.270 [2024-11-17 22:22:53.880622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.270 [2024-11-17 22:22:53.880672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.270 [2024-11-17 22:22:53.880684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:53.890989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:53.891035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:53.891047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:53.899126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:53.899173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:53.899184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:53.909952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:53.910006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:53.910051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:53.920640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:53.920671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 
22:22:53.920681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:53.930564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:53.930610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:53.930622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:53.939979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:53.940009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:53.940019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:53.952368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:53.952398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:53.952409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:53.964832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:53.964862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:53.964873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:53.976136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:53.976167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:53.976177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:53.987490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:53.987520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:53.987532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:53.999729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:53.999770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22881 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:53.999781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:54.009032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:54.009063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:54.009073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:54.020661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:54.020691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:54.020702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:54.029794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:54.029824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:54.029834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:54.037863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:54.037892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:54.037902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:54.049869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:54.049899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:54.049910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:54.061319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.530 [2024-11-17 22:22:54.061351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.530 [2024-11-17 22:22:54.061363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.530 [2024-11-17 22:22:54.072845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.531 [2024-11-17 22:22:54.072875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:121 nsid:1 lba:1082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.531 [2024-11-17 22:22:54.072886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.531 [2024-11-17 22:22:54.082168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.531 [2024-11-17 22:22:54.082215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.531 [2024-11-17 22:22:54.082227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.531 [2024-11-17 22:22:54.092328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.531 [2024-11-17 22:22:54.092359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.531 [2024-11-17 22:22:54.092370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.531 [2024-11-17 22:22:54.101223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.531 [2024-11-17 22:22:54.101253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.531 [2024-11-17 22:22:54.101264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.531 [2024-11-17 22:22:54.113715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.531 [2024-11-17 22:22:54.113756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.531 [2024-11-17 22:22:54.113767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.531 [2024-11-17 22:22:54.125970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.531 [2024-11-17 22:22:54.126007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.531 [2024-11-17 22:22:54.126035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.531 [2024-11-17 22:22:54.138063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.531 [2024-11-17 22:22:54.138125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.531 [2024-11-17 22:22:54.138138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.790 [2024-11-17 22:22:54.150242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.790 [2024-11-17 22:22:54.150291] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.790 [2024-11-17 22:22:54.150303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.790 [2024-11-17 22:22:54.160346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.790 [2024-11-17 22:22:54.160378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.790 [2024-11-17 22:22:54.160388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.790 [2024-11-17 22:22:54.170483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.790 [2024-11-17 22:22:54.170514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.790 [2024-11-17 22:22:54.170524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.790 [2024-11-17 22:22:54.181783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.790 [2024-11-17 22:22:54.181813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.790 [2024-11-17 22:22:54.181824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.790 [2024-11-17 22:22:54.193567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.790 [2024-11-17 22:22:54.193598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.790 [2024-11-17 22:22:54.193609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.790 [2024-11-17 22:22:54.205324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.790 [2024-11-17 22:22:54.205354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.205365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.214965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.214995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.215006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.227666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.227697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.227708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.238328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.238374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.238385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.248732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.248772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.248782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.261398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.261429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.261439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.273025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.273056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.273066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.281433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.281463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.281474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.292610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.292641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.292652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.303287] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.303317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.303328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.313037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.313068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.313079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.321978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.322042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.322054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.331580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.331611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.331622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.341892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.341922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.341933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.351134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.351165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.351176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.360531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.360561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.360572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:57.791 [2024-11-17 22:22:54.370135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.370168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.370179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.379298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.379329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.379339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.391375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.391405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.391416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.791 [2024-11-17 22:22:54.402713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:57.791 [2024-11-17 22:22:54.402753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.791 [2024-11-17 22:22:54.402766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.051 [2024-11-17 22:22:54.414672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.051 [2024-11-17 22:22:54.414703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.051 [2024-11-17 22:22:54.414714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.051 [2024-11-17 22:22:54.425390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.051 [2024-11-17 22:22:54.425437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.051 [2024-11-17 22:22:54.425449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.051 [2024-11-17 22:22:54.435144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.051 [2024-11-17 22:22:54.435175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.051 [2024-11-17 22:22:54.435185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.051 [2024-11-17 22:22:54.445843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.051 [2024-11-17 22:22:54.445873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.051 [2024-11-17 22:22:54.445884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.051 [2024-11-17 22:22:54.454681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.051 [2024-11-17 22:22:54.454712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.051 [2024-11-17 22:22:54.454723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.051 [2024-11-17 22:22:54.466957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.051 [2024-11-17 22:22:54.466988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.051 [2024-11-17 22:22:54.467000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.051 [2024-11-17 22:22:54.478927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.051 [2024-11-17 22:22:54.478960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.051 [2024-11-17 22:22:54.478971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.051 [2024-11-17 22:22:54.489162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.051 [2024-11-17 22:22:54.489192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.051 [2024-11-17 22:22:54.489203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.051 [2024-11-17 22:22:54.501051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.051 [2024-11-17 22:22:54.501082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.051 [2024-11-17 22:22:54.501092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.051 [2024-11-17 22:22:54.513099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.051 [2024-11-17 22:22:54.513132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.051 [2024-11-17 22:22:54.513143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.051 [2024-11-17 22:22:54.525380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.525410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.052 [2024-11-17 22:22:54.525420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.052 [2024-11-17 22:22:54.536911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.536943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.052 [2024-11-17 22:22:54.536954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.052 [2024-11-17 22:22:54.546955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.546984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.052 [2024-11-17 22:22:54.546995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.052 [2024-11-17 22:22:54.557193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.557223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.052 [2024-11-17 22:22:54.557234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.052 [2024-11-17 22:22:54.569926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.569956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.052 [2024-11-17 22:22:54.569967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.052 [2024-11-17 22:22:54.581580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.581612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.052 [2024-11-17 22:22:54.581622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.052 [2024-11-17 22:22:54.592852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.592883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.052 [2024-11-17 22:22:54.592893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.052 [2024-11-17 22:22:54.601937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.601967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.052 [2024-11-17 22:22:54.601978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.052 [2024-11-17 22:22:54.614440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.614470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.052 [2024-11-17 22:22:54.614481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.052 [2024-11-17 22:22:54.625831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.625861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.052 [2024-11-17 22:22:54.625871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.052 [2024-11-17 22:22:54.638511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.638540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.052 [2024-11-17 22:22:54.638551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.052 [2024-11-17 22:22:54.650706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.650746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.052 [2024-11-17 22:22:54.650759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.052 [2024-11-17 22:22:54.663468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.052 [2024-11-17 22:22:54.663498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.052 [2024-11-17 22:22:54.663509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.311 [2024-11-17 22:22:54.671793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.311 [2024-11-17 22:22:54.671822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:10430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.311 [2024-11-17 22:22:54.671834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.311 [2024-11-17 22:22:54.684210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.311 [2024-11-17 22:22:54.684241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.311 [2024-11-17 22:22:54.684252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.696040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.696070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.696081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.708136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.708166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.708177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.716089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.716119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.716130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.728320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.728350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.728361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.740189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.740220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.740230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.751680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.751710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.751721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.762380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.762426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.762451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.774885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.774916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.774927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.783834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.783863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.783874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.795534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.795565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.795576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.804220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.804250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.804261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.813021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.813052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.813063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.821455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 
00:22:58.312 [2024-11-17 22:22:54.821485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.821496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.831177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.831207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.831218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.841081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.841113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.841124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.853482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.853527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.853539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.866271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.866303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.866314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.877157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.877187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.877199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.886407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.886438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.886448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.895755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.895785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.895795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.907327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.907373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.907384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.312 [2024-11-17 22:22:54.917360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.312 [2024-11-17 22:22:54.917391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.312 [2024-11-17 22:22:54.917402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.571 [2024-11-17 22:22:54.928524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.571 [2024-11-17 22:22:54.928571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:54.928583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:54.940511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:54.940541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:54.940551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:54.951980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:54.952010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:54.952021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:54.963881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:54.963911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:54.963922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:54.975936] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:54.975965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:54.975976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:54.988518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:54.988548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:54.988559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.000467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.000498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.000509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.009481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.009510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.009520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.019786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.019815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.019826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.032152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.032183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.032195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.043534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.043564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.043574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:58.572 [2024-11-17 22:22:55.055373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.055403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.055414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.065540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.065570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.065581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.075254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.075284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.075294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.087305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.087335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.087346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.098196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.098230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.098240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.109835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.109864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.109875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.122621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.122653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.122663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.134049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.134080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.134091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.143607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.143637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.143647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.153907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.153937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.153948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.163764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.163793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.163804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.173419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.173465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.173476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.572 [2024-11-17 22:22:55.182741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.572 [2024-11-17 22:22:55.182797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.572 [2024-11-17 22:22:55.182810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.194523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.194569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.194581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.204703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.204758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.204771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.213240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.213271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.213282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.222882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.222910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.222921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.231308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.231337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.231348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.241327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.241357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.241367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.250576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.250607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.250618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.259480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.259510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.832 [2024-11-17 22:22:55.259521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.268398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.268428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.268439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.278764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.278804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.278815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.288388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.288417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.288428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.297039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.297068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.297078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.309140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.309169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.309179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.322169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.322200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.322211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.333981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.334033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:9981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.334045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.346057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.346088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.832 [2024-11-17 22:22:55.346099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.832 [2024-11-17 22:22:55.357344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.832 [2024-11-17 22:22:55.357373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.833 [2024-11-17 22:22:55.357384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.833 [2024-11-17 22:22:55.368331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.833 [2024-11-17 22:22:55.368361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.833 [2024-11-17 22:22:55.368372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.833 [2024-11-17 22:22:55.380379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.833 [2024-11-17 22:22:55.380409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.833 [2024-11-17 22:22:55.380420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.833 [2024-11-17 22:22:55.389508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.833 [2024-11-17 22:22:55.389537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.833 [2024-11-17 22:22:55.389548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.833 [2024-11-17 22:22:55.401271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.833 [2024-11-17 22:22:55.401316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.833 [2024-11-17 22:22:55.401327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.833 [2024-11-17 22:22:55.410993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.833 [2024-11-17 22:22:55.411022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.833 [2024-11-17 22:22:55.411032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.833 [2024-11-17 22:22:55.420472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.833 [2024-11-17 22:22:55.420502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.833 [2024-11-17 22:22:55.420512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.833 [2024-11-17 22:22:55.429937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.833 [2024-11-17 22:22:55.429967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.833 [2024-11-17 22:22:55.429978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.833 [2024-11-17 22:22:55.441482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:58.833 [2024-11-17 22:22:55.441515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.833 [2024-11-17 22:22:55.441527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.092 [2024-11-17 22:22:55.452939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.092 [2024-11-17 22:22:55.452970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.092 [2024-11-17 22:22:55.452982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.092 [2024-11-17 22:22:55.463180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.092 [2024-11-17 22:22:55.463211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.092 [2024-11-17 22:22:55.463222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.092 [2024-11-17 22:22:55.473520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.092 [2024-11-17 22:22:55.473551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.092 [2024-11-17 22:22:55.473561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.092 [2024-11-17 22:22:55.483333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 
00:22:59.092 [2024-11-17 22:22:55.483363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.092 [2024-11-17 22:22:55.483373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.092 [2024-11-17 22:22:55.494628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.092 [2024-11-17 22:22:55.494659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.092 [2024-11-17 22:22:55.494670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.092 [2024-11-17 22:22:55.505770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.092 [2024-11-17 22:22:55.505797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.092 [2024-11-17 22:22:55.505808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.092 [2024-11-17 22:22:55.516047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.516079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.516090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.526700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.526729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.526769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.536865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.536895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.536905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.547763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.547792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.547803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.558989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.559020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.559030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.569897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.569927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.569937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.579949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.579980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.579991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.592129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.592191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.592202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.601944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.601990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.602008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.612984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.613016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.613027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.621516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.621562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.621573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.634475] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.634522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.634534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.646775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.646803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.646814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.658858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.658888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.658899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.669042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.669072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.669083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.678373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.678418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.678444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 [2024-11-17 22:22:55.685984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba0f50) 00:22:59.093 [2024-11-17 22:22:55.686053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.093 [2024-11-17 22:22:55.686065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.093 00:22:59.093 Latency(us) 00:22:59.093 [2024-11-17T22:22:55.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.093 [2024-11-17T22:22:55.708Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:59.093 nvme0n1 : 2.00 23554.39 92.01 0.00 0.00 5428.67 2249.08 17992.61 00:22:59.093 [2024-11-17T22:22:55.708Z] =================================================================================================================== 
00:22:59.093 [2024-11-17T22:22:55.708Z] Total : 23554.39 92.01 0.00 0.00 5428.67 2249.08 17992.61 00:22:59.093 0 00:22:59.352 22:22:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:59.352 22:22:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:59.352 22:22:55 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:59.352 | .driver_specific 00:22:59.352 | .nvme_error 00:22:59.352 | .status_code 00:22:59.352 | .command_transient_transport_error' 00:22:59.352 22:22:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:59.611 22:22:55 -- host/digest.sh@71 -- # (( 185 > 0 )) 00:22:59.611 22:22:55 -- host/digest.sh@73 -- # killprocess 87130 00:22:59.611 22:22:55 -- common/autotest_common.sh@936 -- # '[' -z 87130 ']' 00:22:59.611 22:22:55 -- common/autotest_common.sh@940 -- # kill -0 87130 00:22:59.611 22:22:55 -- common/autotest_common.sh@941 -- # uname 00:22:59.611 22:22:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:59.611 22:22:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87130 00:22:59.611 22:22:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:59.611 22:22:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:59.611 22:22:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87130' 00:22:59.611 killing process with pid 87130 00:22:59.611 Received shutdown signal, test time was about 2.000000 seconds 00:22:59.611 00:22:59.611 Latency(us) 00:22:59.611 [2024-11-17T22:22:56.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.611 [2024-11-17T22:22:56.226Z] =================================================================================================================== 00:22:59.611 [2024-11-17T22:22:56.226Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.611 22:22:56 -- common/autotest_common.sh@955 -- # kill 87130 00:22:59.611 22:22:56 -- common/autotest_common.sh@960 -- # wait 87130 00:22:59.870 22:22:56 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:22:59.870 22:22:56 -- host/digest.sh@54 -- # local rw bs qd 00:22:59.870 22:22:56 -- host/digest.sh@56 -- # rw=randread 00:22:59.870 22:22:56 -- host/digest.sh@56 -- # bs=131072 00:22:59.870 22:22:56 -- host/digest.sh@56 -- # qd=16 00:22:59.870 22:22:56 -- host/digest.sh@58 -- # bperfpid=87217 00:22:59.870 22:22:56 -- host/digest.sh@60 -- # waitforlisten 87217 /var/tmp/bperf.sock 00:22:59.870 22:22:56 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:59.870 22:22:56 -- common/autotest_common.sh@829 -- # '[' -z 87217 ']' 00:22:59.870 22:22:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:59.870 22:22:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:59.870 22:22:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:59.870 22:22:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.870 22:22:56 -- common/autotest_common.sh@10 -- # set +x 00:22:59.870 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:59.870 Zero copy mechanism will not be used. 
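Earlier in this block, the get_transient_errcount step (host/digest.sh@71) reads bdevperf's iostat over the bperf RPC socket and extracts the transient-transport-error counter with jq, and the (( 185 > 0 )) check then asserts that the injected digest errors were actually reported. A minimal standalone sketch of that query, reusing only the rpc.py call and jq filter visible in the trace (the wrapper function here is illustrative, not the exact body of the test script's helper):

# Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for a bdev,
# using the same bdev_get_iostat RPC and jq filter shown in the trace above.
get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# Usage, as in the trace: fail the test unless at least one transient error was counted.
(( $(get_transient_errcount nvme0n1) > 0 ))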
00:22:59.870 [2024-11-17 22:22:56.377373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:59.871 [2024-11-17 22:22:56.377472] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87217 ] 00:23:00.130 [2024-11-17 22:22:56.513102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.130 [2024-11-17 22:22:56.601990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.697 22:22:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.697 22:22:57 -- common/autotest_common.sh@862 -- # return 0 00:23:00.697 22:22:57 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:00.697 22:22:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:01.264 22:22:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:01.264 22:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.264 22:22:57 -- common/autotest_common.sh@10 -- # set +x 00:23:01.264 22:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.264 22:22:57 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:01.264 22:22:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:01.524 nvme0n1 00:23:01.524 22:22:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:01.524 22:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.524 22:22:57 -- common/autotest_common.sh@10 -- # set +x 00:23:01.524 22:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.524 22:22:57 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:01.524 22:22:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:01.524 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:01.524 Zero copy mechanism will not be used. 00:23:01.524 Running I/O for 2 seconds... 
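Before perform_tests starts the 2-second random-read run, the trace configures the bperf target: per-error-code NVMe statistics are enabled with unlimited retries, CRC32C error injection is initially disabled, the controller is attached over TCP with --ddgst so the data digest is verified on completion, and corruption is then injected into every 32nd CRC32C operation. A condensed sketch of that sequence using only the RPC calls visible above (the rpc shell variable is illustrative shorthand for the bperf_rpc wrapper):

# Condensed from the bperf setup traced above; all flags appear verbatim in the log.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Keep per-error-code statistics and retry failed commands indefinitely.
$rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start with CRC32C error injection disabled while the controller is attached.
$rpc accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest enabled (--ddgst).
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd CRC32C operation, then drive I/O through bdevperf.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests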
00:23:01.524 [2024-11-17 22:22:58.039854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.039900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.039913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.043566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.043598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.043610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.047417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.047448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.047458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.050903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.050933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.050944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.054359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.054405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.054415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.058146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.058177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.058189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.061626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.061657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.061668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.065519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.065549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.065560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.069425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.069455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.069466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.073359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.073390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.073400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.077137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.077167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.077178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.080681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.080711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.080722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.084507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.084537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.084547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.088260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.088289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.088299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.091432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.091462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.091472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.095177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.095208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.095218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.099024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.099052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.099063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.102722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.102761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.102772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.106105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.106138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.106149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.109705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.109748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.109761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.113728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.113769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.113781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.117377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.117407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.117417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.121428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.121457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.524 [2024-11-17 22:22:58.121467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.524 [2024-11-17 22:22:58.124794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.524 [2024-11-17 22:22:58.124820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.525 [2024-11-17 22:22:58.124831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.525 [2024-11-17 22:22:58.128498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.525 [2024-11-17 22:22:58.128529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.525 [2024-11-17 22:22:58.128539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.525 [2024-11-17 22:22:58.132131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.525 [2024-11-17 22:22:58.132179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.525 [2024-11-17 22:22:58.132191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.525 [2024-11-17 22:22:58.135405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.525 [2024-11-17 22:22:58.135436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.525 [2024-11-17 22:22:58.135446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.139305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.139352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.786 
[2024-11-17 22:22:58.139363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.143092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.143123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.786 [2024-11-17 22:22:58.143133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.147051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.147082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.786 [2024-11-17 22:22:58.147092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.150144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.150178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.786 [2024-11-17 22:22:58.150190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.154032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.154082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.786 [2024-11-17 22:22:58.154094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.157394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.157424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.786 [2024-11-17 22:22:58.157434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.160836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.160874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.786 [2024-11-17 22:22:58.160884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.164380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.164410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:01.786 [2024-11-17 22:22:58.164420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.167634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.167663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.786 [2024-11-17 22:22:58.167673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.171510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.171541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.786 [2024-11-17 22:22:58.171551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.175304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.175335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.786 [2024-11-17 22:22:58.175345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.178856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.178885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.786 [2024-11-17 22:22:58.178896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.786 [2024-11-17 22:22:58.182481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.786 [2024-11-17 22:22:58.182527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.182538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.185411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.185457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.185481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.188926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.188957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.188968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.192219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.192249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.192259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.196103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.196134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.196144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.199569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.199599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.199610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.202967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.202996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.203006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.205854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.205884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.205894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.209708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.209748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.209760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.213280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.213311] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.213322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.216647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.216676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.216687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.220216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.220245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.220256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.224265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.224293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.224303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.228092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.228122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.228132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.231532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.231562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.231573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.235305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.235334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.235345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.238962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.238992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.239003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.242394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.242422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.242433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.245646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.245676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.245686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.249313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.249343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.249354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.252770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.252797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.252807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.257067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.257097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.257108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.260552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.260583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.260594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.264250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 
[2024-11-17 22:22:58.264280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.264290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.267402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.267432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.267442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.270900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.270931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.270941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.274378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.274409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.787 [2024-11-17 22:22:58.274420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.787 [2024-11-17 22:22:58.277815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.787 [2024-11-17 22:22:58.277841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.277852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.281473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.281521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.281532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.284814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.284843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.284853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.288181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.288211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.288222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.292299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.292328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.292338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.295400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.295430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.295440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.298518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.298548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.298559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.302193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.302225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.302236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.305533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.305562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.305573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.308846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.308875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.308885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.312688] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.312717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.312727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.316539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.316567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.316577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.319703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.319732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.319753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.323924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.323955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.323966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.327721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.327758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.327770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.331065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.331095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.331105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.334603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.334632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.334643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
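
The repeated entries above show the NVMe/TCP initiator detecting data digest mismatches on received data PDUs (nvme_tcp_accel_seq_recv_compute_crc32_done) and completing each affected READ with a TRANSIENT TRANSPORT ERROR (00/22) status instead of tearing down the queue pair. The NVMe/TCP data digest is a CRC32C over the PDU payload; the standalone sketch below is illustrative only and is not SPDK code — the crc32c() helper, the example payload, and the corruption scenario are assumptions made purely to show how such a mismatch is detected in principle.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reflected CRC32C (Castagnoli polynomial 0x1EDC6F41, bit-reversed form 0x82F63B78),
 * the checksum family used for NVMe/TCP header and data digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[512];                       /* example payload; a real PDU carries the command's data */
    memset(payload, 0xA5, sizeof(payload));

    uint32_t sent_digest = crc32c(payload, sizeof(payload)); /* digest the sender appends to the PDU */

    payload[7] ^= 0x01;                         /* simulate corruption in flight */

    if (crc32c(payload, sizeof(payload)) != sent_digest)
        fprintf(stderr, "data digest error: complete command with transient transport error\n");
    else
        printf("data digest ok\n");
    return 0;
}

Because the mismatch indicates a transport-level corruption rather than a media or namespace fault, reporting it as a transient transport error lets the host retry the command, which is why the log shows a long series of per-command notices rather than a single fatal failure.
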
00:23:01.788 [2024-11-17 22:22:58.338623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.338671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.338683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.342529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.342574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.342586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.345957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.346026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.346038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.349578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.349608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.349618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.353242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.353288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.353299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.357020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.357068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.357079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.359942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.359974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.359984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.364262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.364292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.364302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.367828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.367851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.367861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.371398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.371428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.788 [2024-11-17 22:22:58.371438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.788 [2024-11-17 22:22:58.375315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.788 [2024-11-17 22:22:58.375345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.789 [2024-11-17 22:22:58.375356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.789 [2024-11-17 22:22:58.378887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.789 [2024-11-17 22:22:58.378916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.789 [2024-11-17 22:22:58.378926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.789 [2024-11-17 22:22:58.382458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.789 [2024-11-17 22:22:58.382488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.789 [2024-11-17 22:22:58.382515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.789 [2024-11-17 22:22:58.386380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.789 [2024-11-17 22:22:58.386425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.789 [2024-11-17 22:22:58.386450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.789 [2024-11-17 22:22:58.389891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.789 [2024-11-17 22:22:58.389920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.789 [2024-11-17 22:22:58.389930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.789 [2024-11-17 22:22:58.393713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.789 [2024-11-17 22:22:58.393769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.789 [2024-11-17 22:22:58.393781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.789 [2024-11-17 22:22:58.396442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:01.789 [2024-11-17 22:22:58.396471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.789 [2024-11-17 22:22:58.396482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.400668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.400712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.400722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.404464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.404511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.404523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.408684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.408712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.408723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.412358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.412387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.412397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.415803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.415829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.415839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.419560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.419590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.419600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.423328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.423358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.423369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.426883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.426912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.426923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.430178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.430208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.430219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.434260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.434292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.434303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.438188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.438220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 
[2024-11-17 22:22:58.438231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.442132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.442161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.442172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.445688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.445717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.445728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.449082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.449113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.449123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.453066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.453097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.453108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.455976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.456005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.456015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.459570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.459599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.459609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.463078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.463108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.463118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.466984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.467014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.467025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.471287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.471317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.471328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.475270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.475301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.475311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.478786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.478842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.478853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.482460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.482491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.482501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.485732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.485794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.485805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.050 [2024-11-17 22:22:58.489218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.050 [2024-11-17 22:22:58.489248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.050 [2024-11-17 22:22:58.489259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.492921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.492951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.492962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.496184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.496214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.496225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.499850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.499880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.499890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.503722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.503761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.503772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.506942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.506971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.506981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.510317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.510395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.510406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.513905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.513936] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.513945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.517687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.517717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.517727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.521161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.521190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.521201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.524889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.524919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.524929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.527538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.527567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.527577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.531268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.531297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.531307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.535612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.535642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.535652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.539437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.539482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.539493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.543306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.543352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.543362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.547311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.547341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.547352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.551149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.551194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.551205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.555083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.555130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.555153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.559351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.559380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.559390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.562982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.563028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.563039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.566673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 
[2024-11-17 22:22:58.566703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.566714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.570086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.570118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.570129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.573942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.573973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.573984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.577490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.577521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.577531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.581050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.581080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.581091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.584577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.584606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.584616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.051 [2024-11-17 22:22:58.588274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.051 [2024-11-17 22:22:58.588303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.051 [2024-11-17 22:22:58.588314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.591879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.591908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.591919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.595229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.595259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.595269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.599069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.599099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.599109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.603281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.603311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.603321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.606798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.606826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.606837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.610911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.610941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.610952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.613944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.613974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.613985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.617885] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.617916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.617927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.621440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.621470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.621480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.624562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.624592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.624602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.628089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.628119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.628129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.631229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.631260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.631270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.634897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.634925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.634935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.638614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.638644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.638655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:02.052 [2024-11-17 22:22:58.641895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.641925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.641935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.645416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.645446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.645456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.649010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.649041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.649052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.652779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.652805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.652816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.656070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.656116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.656141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.052 [2024-11-17 22:22:58.660075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.052 [2024-11-17 22:22:58.660122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.052 [2024-11-17 22:22:58.660132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.663979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.664010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.664020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.667385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.667433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.667459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.670794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.670835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.670845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.673649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.673679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.673689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.677044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.677074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.677099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.681124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.681155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.681166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.684711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.684753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.684764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.688072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.688102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.688113] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.691590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.691620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.691631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.694999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.695029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.695040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.699046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.699076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.699087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.702812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.702840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.702850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.706676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.706706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.706717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.710268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.710300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.710310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.313 [2024-11-17 22:22:58.713710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.313 [2024-11-17 22:22:58.713749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.313 [2024-11-17 22:22:58.713760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.717557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.717586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.717598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.720827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.720857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.720867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.724867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.724894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.724905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.728027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.728057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.728067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.731186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.731216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.731226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.734643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.734673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.734684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.737902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.737932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:02.314 [2024-11-17 22:22:58.737942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.741751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.741796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.741807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.744707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.744762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.744774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.748724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.748780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.748792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.752427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.752472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.752483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.756177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.756222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.756233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.760058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.760104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.760115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.763666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.763711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.763722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.766934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.766978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.766990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.770644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.770690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.770700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.774239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.774272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.774300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.777755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.777783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.777794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.780559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.780590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.780600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.783990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.784021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.784031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.787842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.787872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.787882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.791350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.791380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.791389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.794820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.314 [2024-11-17 22:22:58.794850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.314 [2024-11-17 22:22:58.794860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.314 [2024-11-17 22:22:58.798621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.798652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.798663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.802072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.802103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.802114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.805516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.805546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.805557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.808585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.808615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.808625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.811420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.811451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.811462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.815271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.815301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.815311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.818676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.818707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.818717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.821927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.821958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.821968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.825536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.825582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.825593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.828828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.828874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.828884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.832406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.832451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.832463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.836045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 
[2024-11-17 22:22:58.836090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.836101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.839934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.839965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.839975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.843357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.843387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.843398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.847251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.847282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.847292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.850676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.850706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.850716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.854176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.854209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.854219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.857961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.857991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.858024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.861500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.861531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.861541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.864718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.864755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.864766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.867568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.867598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.867609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.870855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.870885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.870895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.874677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.874706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.874716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.878140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.878170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.878180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.315 [2024-11-17 22:22:58.881915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.315 [2024-11-17 22:22:58.881945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.315 [2024-11-17 22:22:58.881956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.316 [2024-11-17 22:22:58.885472] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.316 [2024-11-17 22:22:58.885503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.316 [2024-11-17 22:22:58.885513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.316 [2024-11-17 22:22:58.888944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.316 [2024-11-17 22:22:58.888974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.316 [2024-11-17 22:22:58.888985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.316 [2024-11-17 22:22:58.893082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.316 [2024-11-17 22:22:58.893110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.316 [2024-11-17 22:22:58.893120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.316 [2024-11-17 22:22:58.896503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.316 [2024-11-17 22:22:58.896533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.316 [2024-11-17 22:22:58.896542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.316 [2024-11-17 22:22:58.900003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.316 [2024-11-17 22:22:58.900033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.316 [2024-11-17 22:22:58.900043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.316 [2024-11-17 22:22:58.903759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.316 [2024-11-17 22:22:58.903788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.316 [2024-11-17 22:22:58.903797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.316 [2024-11-17 22:22:58.907995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.316 [2024-11-17 22:22:58.908025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.316 [2024-11-17 22:22:58.908036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:02.316 [2024-11-17 22:22:58.911545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.316 [2024-11-17 22:22:58.911574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.316 [2024-11-17 22:22:58.911584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.316 [2024-11-17 22:22:58.915309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.316 [2024-11-17 22:22:58.915340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.316 [2024-11-17 22:22:58.915350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.316 [2024-11-17 22:22:58.918601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.316 [2024-11-17 22:22:58.918632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.316 [2024-11-17 22:22:58.918642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.316 [2024-11-17 22:22:58.922800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.316 [2024-11-17 22:22:58.922840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.316 [2024-11-17 22:22:58.922852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.926634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.926665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.926676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.929719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.929757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.929768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.933305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.933336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.933345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.937218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.937249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.937259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.940775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.940800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.940811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.944370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.944401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.944411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.947539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.947568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.947579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.951702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.951745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.951758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.955101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.955132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.955141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.958584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.958614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.958624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.961921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.961951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.961961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.965291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.965323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.965333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.969167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.969197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.969207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.973398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.973429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.973439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.975999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.976028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.976039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.980779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.577 [2024-11-17 22:22:58.980802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.577 [2024-11-17 22:22:58.980818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.577 [2024-11-17 22:22:58.984395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:58.984426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:58.984436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:58.988359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:58.988390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:58.988400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:58.991500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:58.991531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:58.991541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:58.995379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:58.995410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:58.995420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:58.998812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:58.998842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:58.998853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.002724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.002775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.002787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.006053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.006085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.006096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.009419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.009450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 
[2024-11-17 22:22:59.009461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.012908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.012938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.012949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.016384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.016415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.016425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.020257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.020286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.020296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.024859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.024889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.024899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.028677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.028707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.028717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.032260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.032290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.032301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.035685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.035714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.035724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.039684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.039712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.039723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.043453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.043483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.043493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.046657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.046687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.046698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.050205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.050235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.050247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.053864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.053894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.053905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.057595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.578 [2024-11-17 22:22:59.057625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.578 [2024-11-17 22:22:59.057636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.578 [2024-11-17 22:22:59.060822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.060852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.060862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.064548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.064579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.064590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.068125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.068156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.068166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.071941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.071971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.071981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.074465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.074493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.074503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.078412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.078441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.078451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.081799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.081828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.081839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.085567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.085596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.085606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.089678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.089708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.089719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.092972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.093002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.093012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.096331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.096362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.096372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.099849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.099878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.099888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.103524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.103554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.103564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.106977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.107007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.107018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.110930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 
[2024-11-17 22:22:59.110958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.110968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.114392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.114422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.114433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.117658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.117691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.117702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.121232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.121278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.121289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.124782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.124811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.124821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.127919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.127948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.127959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.131509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.131540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.131551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.135324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.135355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.135366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.138907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.138954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.138965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.142352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.142398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.142410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.145933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.145963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.579 [2024-11-17 22:22:59.145973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.579 [2024-11-17 22:22:59.149299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.579 [2024-11-17 22:22:59.149328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.580 [2024-11-17 22:22:59.149338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.580 [2024-11-17 22:22:59.153291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.580 [2024-11-17 22:22:59.153322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.580 [2024-11-17 22:22:59.153332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.580 [2024-11-17 22:22:59.156650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.580 [2024-11-17 22:22:59.156680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.580 [2024-11-17 22:22:59.156691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.580 [2024-11-17 22:22:59.160201] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.580 [2024-11-17 22:22:59.160230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.580 [2024-11-17 22:22:59.160241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.580 [2024-11-17 22:22:59.163784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.580 [2024-11-17 22:22:59.163812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.580 [2024-11-17 22:22:59.163823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.580 [2024-11-17 22:22:59.166968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.580 [2024-11-17 22:22:59.166997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.580 [2024-11-17 22:22:59.167007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.580 [2024-11-17 22:22:59.170501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.580 [2024-11-17 22:22:59.170530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.580 [2024-11-17 22:22:59.170540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.580 [2024-11-17 22:22:59.174318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.580 [2024-11-17 22:22:59.174379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.580 [2024-11-17 22:22:59.174390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.580 [2024-11-17 22:22:59.176913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.580 [2024-11-17 22:22:59.176943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.580 [2024-11-17 22:22:59.176953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.580 [2024-11-17 22:22:59.180367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.580 [2024-11-17 22:22:59.180397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.580 [2024-11-17 22:22:59.180408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:02.580 [2024-11-17 22:22:59.184208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.580 [2024-11-17 22:22:59.184255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.580 [2024-11-17 22:22:59.184267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.580 [2024-11-17 22:22:59.187996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.580 [2024-11-17 22:22:59.188042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.580 [2024-11-17 22:22:59.188068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.192007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.192038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.192048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.195603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.195634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.195660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.199718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.199759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.199771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.203292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.203322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.203332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.206488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.206518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.206528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.210804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.210833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.210843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.214527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.214556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.214566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.217982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.218016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.218043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.220513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.220543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.220553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.224587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.224616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.224627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.228099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.228130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.228140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.231520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.231551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.231561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.235091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.840 [2024-11-17 22:22:59.235122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.840 [2024-11-17 22:22:59.235132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.840 [2024-11-17 22:22:59.238416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.238462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.238487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.241933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.241963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.241974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.245374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.245404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.245415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.248747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.248776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.248786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.252985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.253031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.253042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.256481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.256512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.256523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.259679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.259709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.259719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.263604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.263635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.263645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.266995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.267025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.267035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.270831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.270861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.270871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.274089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.274120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.274131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.277676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.277706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.277717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.281261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.281291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 
[2024-11-17 22:22:59.281301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.284431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.284461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.284471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.287731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.287771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.287782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.291370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.291401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.291411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.295253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.295284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.295294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.298442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.298473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.298483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.301526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.301556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.301566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.304566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.304597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19872 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.304608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.308342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.308374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.308384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.311772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.311801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.311812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.315173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.315203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.315213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.319377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.319423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.319435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.323389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.323434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.323445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.325910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.325941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.325952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.329594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.841 [2024-11-17 22:22:59.329624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.841 [2024-11-17 22:22:59.329635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.841 [2024-11-17 22:22:59.333972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.334026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.334038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.337434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.337464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.337474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.341331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.341360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.341370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.344849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.344878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.344889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.348319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.348348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.348358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.351956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.351984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.351995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.354542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.354570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.354580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.358198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.358230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.358241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.361755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.361786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.361797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.365171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.365200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.365210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.368558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.368588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.368598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.372319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.372348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.372358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.375784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.375813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.375823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.379438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 
[2024-11-17 22:22:59.379468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.379478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.382708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.382749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.382760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.386117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.386149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.386160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.389608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.389655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.389666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.393343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.393373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.393383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.396690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.396720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.396729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.400583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.400615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.400626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.404151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.404181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.404192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.408082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.408112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.408122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.412096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.412127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.412137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.415651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.415681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.415691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.419071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.419102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.419113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.423113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.423142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.842 [2024-11-17 22:22:59.423153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.842 [2024-11-17 22:22:59.427074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.842 [2024-11-17 22:22:59.427104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.843 [2024-11-17 22:22:59.427114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.843 [2024-11-17 22:22:59.430769] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.843 [2024-11-17 22:22:59.430807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.843 [2024-11-17 22:22:59.430818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.843 [2024-11-17 22:22:59.434077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.843 [2024-11-17 22:22:59.434127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.843 [2024-11-17 22:22:59.434139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.843 [2024-11-17 22:22:59.437261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.843 [2024-11-17 22:22:59.437291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.843 [2024-11-17 22:22:59.437301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.843 [2024-11-17 22:22:59.440975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.843 [2024-11-17 22:22:59.441005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.843 [2024-11-17 22:22:59.441016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.843 [2024-11-17 22:22:59.444390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.843 [2024-11-17 22:22:59.444420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.843 [2024-11-17 22:22:59.444429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.843 [2024-11-17 22:22:59.448355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:02.843 [2024-11-17 22:22:59.448402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.843 [2024-11-17 22:22:59.448413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.452485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.452546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.452558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:03.104 [2024-11-17 22:22:59.456342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.456372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.456382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.459765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.459801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.459812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.463087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.463117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.463127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.466915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.466961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.466972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.470483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.470512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.470522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.474369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.474413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.474439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.477607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.477637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.477647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.481466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.481496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.481507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.484790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.484819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.484829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.488701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.488730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.488754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.492402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.492431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.492442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.496018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.496048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.496059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.498616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.498645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.498655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.502885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.502915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.502925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.506350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.506395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.506407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.510373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.510417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.510442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.514112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.514144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.514155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.517460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.517489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.517500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.521512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.521540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.521550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.525292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.104 [2024-11-17 22:22:59.525323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.104 [2024-11-17 22:22:59.525334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.104 [2024-11-17 22:22:59.528660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.528690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.528701] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.531408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.531438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.531448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.534629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.534659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.534669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.537893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.537922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.537933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.542129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.542160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.542170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.545247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.545277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.545287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.548640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.548669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.548679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.552653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.552682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:03.105 [2024-11-17 22:22:59.552692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.555922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.555951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.555960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.559940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.559970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.559981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.564512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.564557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.564568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.568810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.568854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.568866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.573212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.573242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.573252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.577053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.577083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.577105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.581247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.581276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.581286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.585323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.585352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.585362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.589173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.589203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.589214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.592451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.592480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.592491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.596565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.596596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.596606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.600282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.600313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.600323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.603826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.603856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.603866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.607327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.607356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.607366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.610910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.610940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.610950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.614392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.614438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.614463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.617908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.617938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.617948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.621002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.621033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.105 [2024-11-17 22:22:59.621043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.105 [2024-11-17 22:22:59.624838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.105 [2024-11-17 22:22:59.624868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.624878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.628843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.628873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.628883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.631979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.632010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.632020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.635319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.635349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.635360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.638993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.639023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.639033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.642333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.642410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.642422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.646488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.646518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.646528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.649643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.649672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.649683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.653463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.653493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.653503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.656338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 
[2024-11-17 22:22:59.656368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.656378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.659760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.659790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.659800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.663259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.663289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.663299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.667225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.667254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.667264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.670715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.670755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.670766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.674274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.674307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.674318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.677838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.677868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.677878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.681849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.681880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.681890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.684836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.684864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.684875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.688576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.688608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.688619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.691914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.691945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.691956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.695641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.695671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.695681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.698765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.698804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.698815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.702542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.702572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.702583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.706493] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.706523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.706533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.710263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.710294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.710305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.106 [2024-11-17 22:22:59.713865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.106 [2024-11-17 22:22:59.713894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.106 [2024-11-17 22:22:59.713904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.717683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.717712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.717722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.721085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.721115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.721125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.724850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.724881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.724892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.728205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.728237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.728247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
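[editor's note, not part of the captured console output] The repeated "data digest error" messages above come from the NVMe/TCP initiator verifying the per-PDU data digest (DDGST), which the NVMe/TCP transport defines as CRC32C; when the digest computed over a received C2HData payload does not match the DDGST carried in the PDU, the READ is completed with the TRANSIENT TRANSPORT ERROR (00/22) status shown in each completion. A minimal standalone sketch of the CRC32C computation involved (illustrative only, not SPDK code; the payload string is hypothetical):

/* crc32c_sketch.c - bitwise, reflected CRC32C (Castagnoli polynomial),
 * the checksum family used for the NVMe/TCP data digest (DDGST). */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
	const uint8_t *p = buf;

	crc = ~crc;                       /* initial value 0xFFFFFFFF */
	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++) {
			/* reversed polynomial 0x82F63B78 (Castagnoli) */
			crc = (crc >> 1) ^ (0x82F63B78 & (-(int32_t)(crc & 1)));
		}
	}
	return ~crc;                      /* final XOR 0xFFFFFFFF */
}

int main(void)
{
	const char payload[] = "123456789";
	/* Known CRC32C check value for "123456789" is 0xE3069283. */
	printf("ddgst = 0x%08X\n", crc32c(0, payload, strlen(payload)));
	return 0;
}

A receiver that computes this over the payload and gets a value different from the transmitted DDGST flags the data digest error seen throughout this log; the transient status allows the command to be retried rather than failed permanently.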
00:23:03.368 [2024-11-17 22:22:59.731427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.731457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.731467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.735271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.735300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.735311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.739406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.739451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.739463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.742708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.742749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.742761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.746866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.746895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.746906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.750841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.750870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.750881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.754350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.754396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.754408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.757725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.757766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.757777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.761546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.761576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.761587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.764745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.764774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.764785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.768224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.768254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.768264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.771885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.771914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.771924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.775315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.775344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.775354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.778858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.778887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.778897] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.782828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.782857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.782867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.786054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.786083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.786094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.790086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.368 [2024-11-17 22:22:59.790116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.368 [2024-11-17 22:22:59.790127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.368 [2024-11-17 22:22:59.793418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.793447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.793457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.797564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.797594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.797604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.801236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.801265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.801275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.805182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.805211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.805221] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.809030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.809076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.809087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.812680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.812727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.812739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.816126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.816171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.816182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.819918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.819948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.819958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.823429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.823458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.823468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.827505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.827535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.827545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.831341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.831371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:03.369 [2024-11-17 22:22:59.831382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.835297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.835325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.835335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.838996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.839026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.839036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.842242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.842273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.842284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.845642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.845672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.845682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.848657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.848688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.848698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.852393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.852422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.852432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.856354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.856383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.856393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.860252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.860282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.860292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.863502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.863532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.863542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.866974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.867005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.867015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.870652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.870683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.870693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.874193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.874225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.874236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.877923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.877952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.877963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.881339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.881368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.881379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.884598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.884628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.884638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.887824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.369 [2024-11-17 22:22:59.887854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.369 [2024-11-17 22:22:59.887864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.369 [2024-11-17 22:22:59.891854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.891884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.891894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.895169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.895200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.895210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.898604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.898633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.898643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.902512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.902540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.902550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.905952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.905981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.905992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.909887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.909917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.909927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.913522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.913550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.913561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.917041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.917069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.917080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.920792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.920817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.920826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.923782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.923810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.923820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.927602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.927633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.927643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.931407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 
[2024-11-17 22:22:59.931437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.931448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.934918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.934947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.934957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.938543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.938589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.938601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.942452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.942482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.942492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.945897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.945927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.945937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.950093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.950124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.950135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.953126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.953155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.953166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.956591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.956621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.956632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.960175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.960206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.960216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.963620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.963651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.963660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.967479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.967508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.967518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.971459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.971488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.971498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.370 [2024-11-17 22:22:59.975507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.370 [2024-11-17 22:22:59.975553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.370 [2024-11-17 22:22:59.975565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.629 [2024-11-17 22:22:59.979627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:22:59.979675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:22:59.979686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.630 [2024-11-17 22:22:59.983386] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:22:59.983416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:22:59.983427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.630 [2024-11-17 22:22:59.987147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:22:59.987193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:22:59.987204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.630 [2024-11-17 22:22:59.990961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:22:59.990990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:22:59.991000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.630 [2024-11-17 22:22:59.995245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:22:59.995275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:22:59.995286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.630 [2024-11-17 22:22:59.997776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:22:59.997819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:22:59.997830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.630 [2024-11-17 22:23:00.002102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:23:00.002138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:23:00.002150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.630 [2024-11-17 22:23:00.006101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:23:00.006135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:23:00.006148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:03.630 [2024-11-17 22:23:00.009888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:23:00.009936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:23:00.009947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.630 [2024-11-17 22:23:00.014027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:23:00.014063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:23:00.014077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.630 [2024-11-17 22:23:00.017700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:23:00.017759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:23:00.017773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.630 [2024-11-17 22:23:00.021888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:23:00.021933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:23:00.021944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.630 [2024-11-17 22:23:00.027058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:23:00.027105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:23:00.027127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.630 [2024-11-17 22:23:00.031324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9d87e0) 00:23:03.630 [2024-11-17 22:23:00.031370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.630 [2024-11-17 22:23:00.031380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.630 00:23:03.630 Latency(us) 00:23:03.630 [2024-11-17T22:23:00.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.630 [2024-11-17T22:23:00.245Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:03.630 nvme0n1 : 2.00 8550.75 1068.84 0.00 0.00 1868.17 510.14 7328.12 00:23:03.630 [2024-11-17T22:23:00.245Z] 
=================================================================================================================== 00:23:03.630 [2024-11-17T22:23:00.245Z] Total : 8550.75 1068.84 0.00 0.00 1868.17 510.14 7328.12 00:23:03.630 0 00:23:03.630 22:23:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:03.630 22:23:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:03.630 22:23:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:03.630 | .driver_specific 00:23:03.630 | .nvme_error 00:23:03.630 | .status_code 00:23:03.630 | .command_transient_transport_error' 00:23:03.630 22:23:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:03.889 22:23:00 -- host/digest.sh@71 -- # (( 552 > 0 )) 00:23:03.889 22:23:00 -- host/digest.sh@73 -- # killprocess 87217 00:23:03.889 22:23:00 -- common/autotest_common.sh@936 -- # '[' -z 87217 ']' 00:23:03.889 22:23:00 -- common/autotest_common.sh@940 -- # kill -0 87217 00:23:03.889 22:23:00 -- common/autotest_common.sh@941 -- # uname 00:23:03.889 22:23:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:03.889 22:23:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87217 00:23:03.889 22:23:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:03.889 22:23:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:03.889 killing process with pid 87217 00:23:03.889 22:23:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87217' 00:23:03.889 Received shutdown signal, test time was about 2.000000 seconds 00:23:03.889 00:23:03.889 Latency(us) 00:23:03.889 [2024-11-17T22:23:00.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.889 [2024-11-17T22:23:00.504Z] =================================================================================================================== 00:23:03.889 [2024-11-17T22:23:00.504Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.889 22:23:00 -- common/autotest_common.sh@955 -- # kill 87217 00:23:03.889 22:23:00 -- common/autotest_common.sh@960 -- # wait 87217 00:23:04.148 22:23:00 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:23:04.148 22:23:00 -- host/digest.sh@54 -- # local rw bs qd 00:23:04.148 22:23:00 -- host/digest.sh@56 -- # rw=randwrite 00:23:04.148 22:23:00 -- host/digest.sh@56 -- # bs=4096 00:23:04.148 22:23:00 -- host/digest.sh@56 -- # qd=128 00:23:04.148 22:23:00 -- host/digest.sh@58 -- # bperfpid=87313 00:23:04.148 22:23:00 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:04.148 22:23:00 -- host/digest.sh@60 -- # waitforlisten 87313 /var/tmp/bperf.sock 00:23:04.148 22:23:00 -- common/autotest_common.sh@829 -- # '[' -z 87313 ']' 00:23:04.148 22:23:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:04.148 22:23:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.148 22:23:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:04.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:23:04.148 22:23:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.148 22:23:00 -- common/autotest_common.sh@10 -- # set +x 00:23:04.148 [2024-11-17 22:23:00.711765] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:04.148 [2024-11-17 22:23:00.711842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87313 ] 00:23:04.419 [2024-11-17 22:23:00.840979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.419 [2024-11-17 22:23:00.930896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.394 22:23:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.394 22:23:01 -- common/autotest_common.sh@862 -- # return 0 00:23:05.394 22:23:01 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:05.394 22:23:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:05.652 22:23:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:05.653 22:23:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.653 22:23:02 -- common/autotest_common.sh@10 -- # set +x 00:23:05.653 22:23:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.653 22:23:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:05.653 22:23:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:05.912 nvme0n1 00:23:05.912 22:23:02 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:05.912 22:23:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.912 22:23:02 -- common/autotest_common.sh@10 -- # set +x 00:23:05.912 22:23:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.912 22:23:02 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:05.912 22:23:02 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:05.912 Running I/O for 2 seconds... 
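The trace above is the per-iteration setup that host/digest.sh performs before the randwrite pass whose output follows. Below is a minimal sketch of that same RPC sequence, restated as standalone commands for readability; every path, address, and option is copied from the trace itself, and the only inference is that the accel_error_inject_error calls (issued via rpc_cmd rather than bperf_rpc) address the nvmf target application's default RPC socket rather than the bperf socket.

# start bdevperf with the workload parameters from the trace (randwrite, 4 KiB I/O, qd 128, 2 s), waiting for RPC (-z)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# enable per-controller NVMe error statistics and unlimited bdev retries on the initiator side
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# on the target (default RPC socket, per the rpc_cmd trace): clear any previous crc32c error injection
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# attach the remote controller with data digest enabled so corrupted payloads surface as digest errors
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# on the target: corrupt every 256th crc32c computation, then drive the 2-second workload
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# read back the transient transport error count, as get_transient_errcount does in the earlier trace;
# the test passes only if this count is greater than zero
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'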
00:23:05.912 [2024-11-17 22:23:02.451814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eea00 00:23:05.912 [2024-11-17 22:23:02.452679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.912 [2024-11-17 22:23:02.452716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.912 [2024-11-17 22:23:02.462164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ea680 00:23:05.912 [2024-11-17 22:23:02.463439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.912 [2024-11-17 22:23:02.463486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.912 [2024-11-17 22:23:02.471144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eff18 00:23:05.912 [2024-11-17 22:23:02.472404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.912 [2024-11-17 22:23:02.472433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.912 [2024-11-17 22:23:02.481024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f0bc0 00:23:05.912 [2024-11-17 22:23:02.481960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.912 [2024-11-17 22:23:02.482026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:05.912 [2024-11-17 22:23:02.487749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f0ff8 00:23:05.912 [2024-11-17 22:23:02.487928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.912 [2024-11-17 22:23:02.487947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:05.912 [2024-11-17 22:23:02.497686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190edd58 00:23:05.912 [2024-11-17 22:23:02.498283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.912 [2024-11-17 22:23:02.498318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.912 [2024-11-17 22:23:02.506756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f0788 00:23:05.912 [2024-11-17 22:23:02.507282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.912 [2024-11-17 22:23:02.507312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0 00:23:05.912 [2024-11-17 22:23:02.515636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ec408 00:23:05.912 [2024-11-17 22:23:02.517039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.912 [2024-11-17 22:23:02.517084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:06.171 [2024-11-17 22:23:02.525245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e95a0 00:23:06.171 [2024-11-17 22:23:02.525811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.525876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.534048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ec408 00:23:06.172 [2024-11-17 22:23:02.535555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.535583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.543620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e99d8 00:23:06.172 [2024-11-17 22:23:02.544216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.544245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.553743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e7818 00:23:06.172 [2024-11-17 22:23:02.554427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.554473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.564269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f6458 00:23:06.172 [2024-11-17 22:23:02.565201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.565229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.573718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e01f8 00:23:06.172 [2024-11-17 22:23:02.575169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.575198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.583095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e9168 00:23:06.172 [2024-11-17 22:23:02.583849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.583878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.592427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f7100 00:23:06.172 [2024-11-17 22:23:02.593109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.593140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.601554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f7970 00:23:06.172 [2024-11-17 22:23:02.602239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.602269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.610913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f7da8 00:23:06.172 [2024-11-17 22:23:02.611598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.611628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.620167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f6cc8 00:23:06.172 [2024-11-17 22:23:02.620692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.620722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.630111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e73e0 00:23:06.172 [2024-11-17 22:23:02.630873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.630927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.640513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e12d8 00:23:06.172 [2024-11-17 22:23:02.641595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.641624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.650689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e4140 00:23:06.172 [2024-11-17 22:23:02.651923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.651967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.660875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e5220 00:23:06.172 [2024-11-17 22:23:02.661403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.661433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.669912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190efae0 00:23:06.172 [2024-11-17 22:23:02.670748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.670817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.678621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f81e0 00:23:06.172 [2024-11-17 22:23:02.679358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.679389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.689542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f4298 00:23:06.172 [2024-11-17 22:23:02.690959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.691003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.700030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f6458 00:23:06.172 [2024-11-17 22:23:02.701268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.701295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.706599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fd208 00:23:06.172 [2024-11-17 22:23:02.706691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.706710] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.717996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fe720 00:23:06.172 [2024-11-17 22:23:02.718700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.718729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.726201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e01f8 00:23:06.172 [2024-11-17 22:23:02.727531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.727559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.738094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f2510 00:23:06.172 [2024-11-17 22:23:02.739241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.739283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.745207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e5220 00:23:06.172 [2024-11-17 22:23:02.745507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.745526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.755385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eb328 00:23:06.172 [2024-11-17 22:23:02.755866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.755894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.765368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f8a50 00:23:06.172 [2024-11-17 22:23:02.766244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.766276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.774532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e1710 00:23:06.172 [2024-11-17 22:23:02.775591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.172 [2024-11-17 22:23:02.775619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:06.172 [2024-11-17 22:23:02.783164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ec408 00:23:06.432 [2024-11-17 22:23:02.784433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.432 [2024-11-17 22:23:02.784479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:06.432 [2024-11-17 22:23:02.792345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ebfd0 00:23:06.432 [2024-11-17 22:23:02.792970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.432 [2024-11-17 22:23:02.793015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:06.432 [2024-11-17 22:23:02.801856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eea00 00:23:06.432 [2024-11-17 22:23:02.803235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.432 [2024-11-17 22:23:02.803264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:06.432 [2024-11-17 22:23:02.812726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f1ca0 00:23:06.432 [2024-11-17 22:23:02.813617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.432 [2024-11-17 22:23:02.813644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:06.432 [2024-11-17 22:23:02.819386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ed920 00:23:06.432 [2024-11-17 22:23:02.819548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.432 [2024-11-17 22:23:02.819565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:06.432 [2024-11-17 22:23:02.829386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e0a68 00:23:06.432 [2024-11-17 22:23:02.830141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.432 [2024-11-17 22:23:02.830172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:06.432 [2024-11-17 22:23:02.838369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e7818 00:23:06.432 [2024-11-17 22:23:02.838716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.432 [2024-11-17 
22:23:02.838746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.847475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190feb58 00:23:06.433 [2024-11-17 22:23:02.847957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.848012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.856552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f6890 00:23:06.433 [2024-11-17 22:23:02.857773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.857813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.865468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190dece0 00:23:06.433 [2024-11-17 22:23:02.866772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.866810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.874352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190edd58 00:23:06.433 [2024-11-17 22:23:02.875309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.875352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.883345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e38d0 00:23:06.433 [2024-11-17 22:23:02.884296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.884340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.892399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190df550 00:23:06.433 [2024-11-17 22:23:02.893344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.893373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.901475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f2948 00:23:06.433 [2024-11-17 22:23:02.902523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:06.433 [2024-11-17 22:23:02.902551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.910607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e0630 00:23:06.433 [2024-11-17 22:23:02.911595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.911639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.919691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f9f68 00:23:06.433 [2024-11-17 22:23:02.920822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.920850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.929138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f7538 00:23:06.433 [2024-11-17 22:23:02.930442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.930488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.938266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ebfd0 00:23:06.433 [2024-11-17 22:23:02.939160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.939203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.947432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fb8b8 00:23:06.433 [2024-11-17 22:23:02.948752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.948788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.957280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f6020 00:23:06.433 [2024-11-17 22:23:02.958167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.958210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.965291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eaef0 00:23:06.433 [2024-11-17 22:23:02.966332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14211 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.966376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.974597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e6b70 00:23:06.433 [2024-11-17 22:23:02.975219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.975248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.982653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f5be8 00:23:06.433 [2024-11-17 22:23:02.983190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.983220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:02.994134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ef270 00:23:06.433 [2024-11-17 22:23:02.995127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:02.995154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:03.000745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ef6a8 00:23:06.433 [2024-11-17 22:23:03.000990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:03.001008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:03.010497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fc128 00:23:06.433 [2024-11-17 22:23:03.010913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:03.010938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:03.019817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f0bc0 00:23:06.433 [2024-11-17 22:23:03.020790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:03.020837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:03.027798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190edd58 00:23:06.433 [2024-11-17 22:23:03.028058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:9861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:03.028081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:06.433 [2024-11-17 22:23:03.037719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eea00 00:23:06.433 [2024-11-17 22:23:03.038627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.433 [2024-11-17 22:23:03.038670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.046849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ee190 00:23:06.693 [2024-11-17 22:23:03.048694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.048722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.056207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e5a90 00:23:06.693 [2024-11-17 22:23:03.057587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.057616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.065226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ed920 00:23:06.693 [2024-11-17 22:23:03.066431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.066475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.074235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e3d08 00:23:06.693 [2024-11-17 22:23:03.075309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.075336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.083118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190de038 00:23:06.693 [2024-11-17 22:23:03.084470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.084498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.092081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f4298 00:23:06.693 [2024-11-17 22:23:03.093178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:34 nsid:1 lba:14420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.093205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.101040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e0630 00:23:06.693 [2024-11-17 22:23:03.102028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.102073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.110678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e23b8 00:23:06.693 [2024-11-17 22:23:03.111279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.111308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.118449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e6fa8 00:23:06.693 [2024-11-17 22:23:03.119155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.119184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.127402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eaab8 00:23:06.693 [2024-11-17 22:23:03.128378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.128407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.136245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eaab8 00:23:06.693 [2024-11-17 22:23:03.137381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.137409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.145258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ff3c8 00:23:06.693 [2024-11-17 22:23:03.145579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.145601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.154431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ea680 00:23:06.693 [2024-11-17 22:23:03.154954] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.154984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.163173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ec408 00:23:06.693 [2024-11-17 22:23:03.164278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.164305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.172663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190de038 00:23:06.693 [2024-11-17 22:23:03.174034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.174061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:06.693 [2024-11-17 22:23:03.183522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f5378 00:23:06.693 [2024-11-17 22:23:03.184398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.693 [2024-11-17 22:23:03.184440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.190169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f81e0 00:23:06.694 [2024-11-17 22:23:03.190275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.190293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.200065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fa3a0 00:23:06.694 [2024-11-17 22:23:03.200522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.200547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.208901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e88f8 00:23:06.694 [2024-11-17 22:23:03.209132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.209151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.217274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e3d08 00:23:06.694 [2024-11-17 22:23:03.217487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.217505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.228215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fb480 00:23:06.694 [2024-11-17 22:23:03.228834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.228881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.235938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e1b48 00:23:06.694 [2024-11-17 22:23:03.236778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.236845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.244746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ec840 00:23:06.694 [2024-11-17 22:23:03.245937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.245980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.253805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e84c0 00:23:06.694 [2024-11-17 22:23:03.254168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.254194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.264885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190edd58 00:23:06.694 [2024-11-17 22:23:03.265737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.265800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.272769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e2c28 00:23:06.694 [2024-11-17 22:23:03.273781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.273827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.281554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fe720 00:23:06.694 [2024-11-17 
22:23:03.283063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.283091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.290483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e7818 00:23:06.694 [2024-11-17 22:23:03.291653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.291681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:06.694 [2024-11-17 22:23:03.299434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e27f0 00:23:06.694 [2024-11-17 22:23:03.299826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.694 [2024-11-17 22:23:03.299850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:06.954 [2024-11-17 22:23:03.308577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190efae0 00:23:06.954 [2024-11-17 22:23:03.309824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.954 [2024-11-17 22:23:03.309878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:06.954 [2024-11-17 22:23:03.317483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e23b8 00:23:06.954 [2024-11-17 22:23:03.318665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:36 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.954 [2024-11-17 22:23:03.318693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:06.954 [2024-11-17 22:23:03.326476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f57b0 00:23:06.954 [2024-11-17 22:23:03.327595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.954 [2024-11-17 22:23:03.327623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:06.954 [2024-11-17 22:23:03.335515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eb760 00:23:06.954 [2024-11-17 22:23:03.336827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.954 [2024-11-17 22:23:03.336854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.954 [2024-11-17 22:23:03.344621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190efae0 
00:23:06.954 [2024-11-17 22:23:03.345688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.954 [2024-11-17 22:23:03.345715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:06.954 [2024-11-17 22:23:03.353547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f4b08 00:23:06.954 [2024-11-17 22:23:03.354252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.954 [2024-11-17 22:23:03.354281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:06.954 [2024-11-17 22:23:03.362809] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fdeb0 00:23:06.954 [2024-11-17 22:23:03.363605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.954 [2024-11-17 22:23:03.363663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.954 [2024-11-17 22:23:03.371998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e3060 00:23:06.954 [2024-11-17 22:23:03.373178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.954 [2024-11-17 22:23:03.373205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.954 [2024-11-17 22:23:03.380415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e27f0 00:23:06.954 [2024-11-17 22:23:03.381291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.954 [2024-11-17 22:23:03.381318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.389408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fcdd0 00:23:06.955 [2024-11-17 22:23:03.389943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.389972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.398367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e23b8 00:23:06.955 [2024-11-17 22:23:03.398887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.398923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.409093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) 
with pdu=0x2000190ea680 00:23:06.955 [2024-11-17 22:23:03.410199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.410227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.417955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f3e60 00:23:06.955 [2024-11-17 22:23:03.419107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.419134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.426841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ee190 00:23:06.955 [2024-11-17 22:23:03.427944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.427970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.435723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ff3c8 00:23:06.955 [2024-11-17 22:23:03.437044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.437072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.443444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f5be8 00:23:06.955 [2024-11-17 22:23:03.444498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.444526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.451880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f1ca0 00:23:06.955 [2024-11-17 22:23:03.452026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.452046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.460833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ef6a8 00:23:06.955 [2024-11-17 22:23:03.461142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.461166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.469715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x18778f0) with pdu=0x2000190ff3c8 00:23:06.955 [2024-11-17 22:23:03.470036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.470061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.478627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e7818 00:23:06.955 [2024-11-17 22:23:03.478870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.478889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.487525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e5ec8 00:23:06.955 [2024-11-17 22:23:03.487728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.487761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.496372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e95a0 00:23:06.955 [2024-11-17 22:23:03.496570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.496589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.506850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f3e60 00:23:06.955 [2024-11-17 22:23:03.508163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.508191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.515928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f57b0 00:23:06.955 [2024-11-17 22:23:03.517378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.517407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.524975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f1868 00:23:06.955 [2024-11-17 22:23:03.526422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.526450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.534021] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e5658 00:23:06.955 [2024-11-17 22:23:03.535283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.535311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.542997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fd208 00:23:06.955 [2024-11-17 22:23:03.544470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.544499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.551818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fe720 00:23:06.955 [2024-11-17 22:23:03.552702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.552746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.955 [2024-11-17 22:23:03.559691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f1868 00:23:06.955 [2024-11-17 22:23:03.560009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.955 [2024-11-17 22:23:03.560032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:07.215 [2024-11-17 22:23:03.571176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f7da8 00:23:07.215 [2024-11-17 22:23:03.572026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.215 [2024-11-17 22:23:03.572070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:07.215 [2024-11-17 22:23:03.577827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fa7d8 00:23:07.215 [2024-11-17 22:23:03.577924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.215 [2024-11-17 22:23:03.577942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:07.215 [2024-11-17 22:23:03.587653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e1f80 00:23:07.215 [2024-11-17 22:23:03.587895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.215 [2024-11-17 22:23:03.587946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:07.215 [2024-11-17 22:23:03.596829] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190efae0 00:23:07.215 [2024-11-17 22:23:03.597460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.215 [2024-11-17 22:23:03.597489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:07.215 [2024-11-17 22:23:03.605836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e6b70 00:23:07.215 [2024-11-17 22:23:03.606917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.215 [2024-11-17 22:23:03.606945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:07.215 [2024-11-17 22:23:03.614968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f9f68 00:23:07.215 [2024-11-17 22:23:03.615327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.215 [2024-11-17 22:23:03.615349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:07.215 [2024-11-17 22:23:03.626060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f7100 00:23:07.215 [2024-11-17 22:23:03.626947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.215 [2024-11-17 22:23:03.626989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:07.215 [2024-11-17 22:23:03.632648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ec840 00:23:07.215 [2024-11-17 22:23:03.632813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.215 [2024-11-17 22:23:03.632832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:07.215 [2024-11-17 22:23:03.644229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f2510 00:23:07.215 [2024-11-17 22:23:03.644943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.644972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.653584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ec408 00:23:07.216 [2024-11-17 22:23:03.654566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.654595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:07.216 
[2024-11-17 22:23:03.663318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e4578 00:23:07.216 [2024-11-17 22:23:03.663517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.663535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.673188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fd640 00:23:07.216 [2024-11-17 22:23:03.673285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.673303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.684181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190efae0 00:23:07.216 [2024-11-17 22:23:03.685477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.685523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.694034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f0bc0 00:23:07.216 [2024-11-17 22:23:03.694832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.694904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.702863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f1430 00:23:07.216 [2024-11-17 22:23:03.704223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.704251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.712366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190df118 00:23:07.216 [2024-11-17 22:23:03.713688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.713717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.720387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f20d8 00:23:07.216 [2024-11-17 22:23:03.721437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.721465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 
m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.729075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e95a0 00:23:07.216 [2024-11-17 22:23:03.730179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.730227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.738221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e4140 00:23:07.216 [2024-11-17 22:23:03.738742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.738784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.746515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f2948 00:23:07.216 [2024-11-17 22:23:03.747256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.747287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.757363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ea680 00:23:07.216 [2024-11-17 22:23:03.757870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.757898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.767301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e7818 00:23:07.216 [2024-11-17 22:23:03.768123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.768178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.777381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ff3c8 00:23:07.216 [2024-11-17 22:23:03.777867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.777904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.787122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f35f0 00:23:07.216 [2024-11-17 22:23:03.787717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.787763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.796910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e6738 00:23:07.216 [2024-11-17 22:23:03.797418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.797449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.806708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f5be8 00:23:07.216 [2024-11-17 22:23:03.807348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.807396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.815839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eaab8 00:23:07.216 [2024-11-17 22:23:03.816501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.816530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:07.216 [2024-11-17 22:23:03.825165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190de8a8 00:23:07.216 [2024-11-17 22:23:03.826145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.216 [2024-11-17 22:23:03.826190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:07.476 [2024-11-17 22:23:03.834376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190efae0 00:23:07.476 [2024-11-17 22:23:03.835400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.476 [2024-11-17 22:23:03.835429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:07.476 [2024-11-17 22:23:03.844068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e99d8 00:23:07.476 [2024-11-17 22:23:03.844338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.476 [2024-11-17 22:23:03.844366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:07.476 [2024-11-17 22:23:03.853624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ed0b0 00:23:07.476 [2024-11-17 22:23:03.854378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.476 [2024-11-17 22:23:03.854408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:07.476 [2024-11-17 22:23:03.865412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ed0b0 00:23:07.476 [2024-11-17 22:23:03.866589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.476 [2024-11-17 22:23:03.866616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:07.476 [2024-11-17 22:23:03.872658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fc128 00:23:07.476 [2024-11-17 22:23:03.873422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.476 [2024-11-17 22:23:03.873449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:07.476 [2024-11-17 22:23:03.883686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e0a68 00:23:07.477 [2024-11-17 22:23:03.884403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.884433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:03.891344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f6cc8 00:23:07.477 [2024-11-17 22:23:03.892557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.892600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:03.902449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fbcf0 00:23:07.477 [2024-11-17 22:23:03.903051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.903082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:03.911788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f4b08 00:23:07.477 [2024-11-17 22:23:03.912777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.912837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:03.920843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e3d08 00:23:07.477 [2024-11-17 22:23:03.922146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.922175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:03.930516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fc128 00:23:07.477 [2024-11-17 22:23:03.931189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.931217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:03.941287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f9f68 00:23:07.477 [2024-11-17 22:23:03.942537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.942565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:03.948554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ed0b0 00:23:07.477 [2024-11-17 22:23:03.948770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.948792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:03.960116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ebfd0 00:23:07.477 [2024-11-17 22:23:03.960906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.960934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:03.968291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e3060 00:23:07.477 [2024-11-17 22:23:03.969207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.969234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:03.977786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e3498 00:23:07.477 [2024-11-17 22:23:03.979020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.979049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:03.986582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fbcf0 00:23:07.477 [2024-11-17 22:23:03.987542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.987569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:03.996569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fc128 00:23:07.477 [2024-11-17 22:23:03.997747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:03.997772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:04.004975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f46d0 00:23:07.477 [2024-11-17 22:23:04.005607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:04.005650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:04.014036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f35f0 00:23:07.477 [2024-11-17 22:23:04.014250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:04.014268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:04.023371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f8a50 00:23:07.477 [2024-11-17 22:23:04.024144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:04.024186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:04.032430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ed0b0 00:23:07.477 [2024-11-17 22:23:04.033596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:04.033641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:04.041564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190dfdc0 00:23:07.477 [2024-11-17 22:23:04.042861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:04.042888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:04.050686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ed4e8 00:23:07.477 [2024-11-17 22:23:04.051787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 
[2024-11-17 22:23:04.051813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:04.059706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ee190 00:23:07.477 [2024-11-17 22:23:04.060629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:04.060657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:04.068760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e4de8 00:23:07.477 [2024-11-17 22:23:04.068951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:04.068969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:04.077716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fa7d8 00:23:07.477 [2024-11-17 22:23:04.078097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:04.078122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:07.477 [2024-11-17 22:23:04.086980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eee38 00:23:07.477 [2024-11-17 22:23:04.087497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.477 [2024-11-17 22:23:04.087527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.096020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f0788 00:23:07.737 [2024-11-17 22:23:04.096313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.096336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.105006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f1868 00:23:07.737 [2024-11-17 22:23:04.105286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.105313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.114157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ecc78 00:23:07.737 [2024-11-17 22:23:04.114450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7957 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:07.737 [2024-11-17 22:23:04.114474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.123534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f8618 00:23:07.737 [2024-11-17 22:23:04.124545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.124579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.132047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f9f68 00:23:07.737 [2024-11-17 22:23:04.132213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.132231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.141109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fb048 00:23:07.737 [2024-11-17 22:23:04.141272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.141291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.150672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f1430 00:23:07.737 [2024-11-17 22:23:04.151860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.151889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.160059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ed4e8 00:23:07.737 [2024-11-17 22:23:04.160485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.160522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.169204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e27f0 00:23:07.737 [2024-11-17 22:23:04.169854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.169884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.178114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e3498 00:23:07.737 [2024-11-17 22:23:04.179182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14577 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.179210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.188079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e4140 00:23:07.737 [2024-11-17 22:23:04.189199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.189226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.196113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e4de8 00:23:07.737 [2024-11-17 22:23:04.196419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.196442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.204125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eff18 00:23:07.737 [2024-11-17 22:23:04.204191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.204209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.213930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f31b8 00:23:07.737 [2024-11-17 22:23:04.214148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.214167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.223912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f57b0 00:23:07.737 [2024-11-17 22:23:04.224364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.224402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.232297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190fdeb0 00:23:07.737 [2024-11-17 22:23:04.233210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.233253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.240639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f2948 00:23:07.737 [2024-11-17 22:23:04.240758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:4158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.240777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.249709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e3d08 00:23:07.737 [2024-11-17 22:23:04.250056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.737 [2024-11-17 22:23:04.250081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:07.737 [2024-11-17 22:23:04.258847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e84c0 00:23:07.737 [2024-11-17 22:23:04.259327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.738 [2024-11-17 22:23:04.259366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:07.738 [2024-11-17 22:23:04.267850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e49b0 00:23:07.738 [2024-11-17 22:23:04.268074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.738 [2024-11-17 22:23:04.268094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:07.738 [2024-11-17 22:23:04.276685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f9f68 00:23:07.738 [2024-11-17 22:23:04.276894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.738 [2024-11-17 22:23:04.276912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:07.738 [2024-11-17 22:23:04.286961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ee5c8 00:23:07.738 [2024-11-17 22:23:04.288227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.738 [2024-11-17 22:23:04.288256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.738 [2024-11-17 22:23:04.296140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e84c0 00:23:07.738 [2024-11-17 22:23:04.296796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.738 [2024-11-17 22:23:04.296825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:07.738 [2024-11-17 22:23:04.303849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f6890 00:23:07.738 [2024-11-17 22:23:04.304709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:124 nsid:1 lba:5995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.738 [2024-11-17 22:23:04.304763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:07.738 [2024-11-17 22:23:04.312659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f6890 00:23:07.738 [2024-11-17 22:23:04.313820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.738 [2024-11-17 22:23:04.313846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:07.738 [2024-11-17 22:23:04.321606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f6890 00:23:07.738 [2024-11-17 22:23:04.322898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.738 [2024-11-17 22:23:04.322925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.738 [2024-11-17 22:23:04.330777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f6890 00:23:07.738 [2024-11-17 22:23:04.331961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.738 [2024-11-17 22:23:04.331988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.738 [2024-11-17 22:23:04.339310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ec840 00:23:07.738 [2024-11-17 22:23:04.339825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.738 [2024-11-17 22:23:04.339862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:07.738 [2024-11-17 22:23:04.348304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f0788 00:23:07.738 [2024-11-17 22:23:04.348601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.738 [2024-11-17 22:23:04.348625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:07.996 [2024-11-17 22:23:04.357206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f4298 00:23:07.996 [2024-11-17 22:23:04.357431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.996 [2024-11-17 22:23:04.357449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:07.996 [2024-11-17 22:23:04.366300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190e95a0 00:23:07.996 [2024-11-17 22:23:04.366517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.996 [2024-11-17 22:23:04.366536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:07.996 [2024-11-17 22:23:04.377617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f1430 00:23:07.996 [2024-11-17 22:23:04.378653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.996 [2024-11-17 22:23:04.378679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:07.996 [2024-11-17 22:23:04.384389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ea248 00:23:07.996 [2024-11-17 22:23:04.384596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.996 [2024-11-17 22:23:04.384614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:07.996 [2024-11-17 22:23:04.395438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f92c0 00:23:07.996 [2024-11-17 22:23:04.396175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.996 [2024-11-17 22:23:04.396204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:07.996 [2024-11-17 22:23:04.403365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190ddc00 00:23:07.996 [2024-11-17 22:23:04.404253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.996 [2024-11-17 22:23:04.404297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:07.996 [2024-11-17 22:23:04.411936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190f4f40 00:23:07.996 [2024-11-17 22:23:04.412976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.996 [2024-11-17 22:23:04.413004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:07.996 [2024-11-17 22:23:04.422750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190dece0 00:23:07.996 [2024-11-17 22:23:04.423468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.996 [2024-11-17 22:23:04.423496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:07.996 [2024-11-17 22:23:04.430573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18778f0) with pdu=0x2000190eb760 00:23:07.996 [2024-11-17 
22:23:04.431838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.996 [2024-11-17 22:23:04.431865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:07.996 00:23:07.996 Latency(us) 00:23:07.996 [2024-11-17T22:23:04.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.996 [2024-11-17T22:23:04.611Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:07.996 nvme0n1 : 2.00 27581.71 107.74 0.00 0.00 4635.71 1869.27 15252.01 00:23:07.996 [2024-11-17T22:23:04.611Z] =================================================================================================================== 00:23:07.996 [2024-11-17T22:23:04.611Z] Total : 27581.71 107.74 0.00 0.00 4635.71 1869.27 15252.01 00:23:07.996 0 00:23:07.996 22:23:04 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:07.996 22:23:04 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:07.996 22:23:04 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:07.996 | .driver_specific 00:23:07.996 | .nvme_error 00:23:07.996 | .status_code 00:23:07.996 | .command_transient_transport_error' 00:23:07.996 22:23:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:08.256 22:23:04 -- host/digest.sh@71 -- # (( 216 > 0 )) 00:23:08.256 22:23:04 -- host/digest.sh@73 -- # killprocess 87313 00:23:08.256 22:23:04 -- common/autotest_common.sh@936 -- # '[' -z 87313 ']' 00:23:08.256 22:23:04 -- common/autotest_common.sh@940 -- # kill -0 87313 00:23:08.256 22:23:04 -- common/autotest_common.sh@941 -- # uname 00:23:08.256 22:23:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:08.256 22:23:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87313 00:23:08.256 22:23:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:08.256 22:23:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:08.256 killing process with pid 87313 00:23:08.256 22:23:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87313' 00:23:08.256 Received shutdown signal, test time was about 2.000000 seconds 00:23:08.256 00:23:08.256 Latency(us) 00:23:08.256 [2024-11-17T22:23:04.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.256 [2024-11-17T22:23:04.871Z] =================================================================================================================== 00:23:08.256 [2024-11-17T22:23:04.871Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.256 22:23:04 -- common/autotest_common.sh@955 -- # kill 87313 00:23:08.256 22:23:04 -- common/autotest_common.sh@960 -- # wait 87313 00:23:08.514 22:23:05 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:08.514 22:23:05 -- host/digest.sh@54 -- # local rw bs qd 00:23:08.514 22:23:05 -- host/digest.sh@56 -- # rw=randwrite 00:23:08.514 22:23:05 -- host/digest.sh@56 -- # bs=131072 00:23:08.514 22:23:05 -- host/digest.sh@56 -- # qd=16 00:23:08.514 22:23:05 -- host/digest.sh@58 -- # bperfpid=87402 00:23:08.514 22:23:05 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:08.514 22:23:05 -- host/digest.sh@60 -- # waitforlisten 87402 /var/tmp/bperf.sock 00:23:08.514 22:23:05 
-- common/autotest_common.sh@829 -- # '[' -z 87402 ']' 00:23:08.514 22:23:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:08.514 22:23:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:08.514 22:23:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:08.514 22:23:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.514 22:23:05 -- common/autotest_common.sh@10 -- # set +x 00:23:08.514 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:08.514 Zero copy mechanism will not be used. 00:23:08.514 [2024-11-17 22:23:05.114504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:08.514 [2024-11-17 22:23:05.114579] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87402 ] 00:23:08.773 [2024-11-17 22:23:05.243968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.773 [2024-11-17 22:23:05.329588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.711 22:23:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.711 22:23:06 -- common/autotest_common.sh@862 -- # return 0 00:23:09.711 22:23:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:09.711 22:23:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:09.711 22:23:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:09.711 22:23:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.711 22:23:06 -- common/autotest_common.sh@10 -- # set +x 00:23:09.711 22:23:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.711 22:23:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:09.711 22:23:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:10.280 nvme0n1 00:23:10.280 22:23:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:10.280 22:23:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.280 22:23:06 -- common/autotest_common.sh@10 -- # set +x 00:23:10.280 22:23:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.280 22:23:06 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:10.280 22:23:06 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:10.280 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:10.280 Zero copy mechanism will not be used. 00:23:10.280 Running I/O for 2 seconds... 
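Note on this part of the trace: the repeated COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions are the expected outcome of the digest test, not a failure. The crc32c error injection configured a few lines above corrupts digest calculations at a set interval, the TCP data digest check then fails, and each affected WRITE completes back to bdevperf with generic status 0x22 (SCT 00 / SC 22, printed as "00/22" above). host/digest.sh reads that counter back from bdevperf via bdev_get_iostat and checks that it is non-zero ((( 216 > 0 )) in the 4 KiB randwrite run above). Condensed from the RPC calls visible in the trace, the flow is roughly the sketch below; paths, socket names and parameters are copied from the log, the digest.sh helper functions are replaced by plain shell, and the target-side RPC socket is assumed to be the default (this is a sketch, not the actual autotest script).

  # Condensed sketch of the data-digest error-injection flow traced above.
  # Assumes the NVMe-oF TCP target already serves nqn.2016-06.io.spdk:cnode1 on
  # 10.0.0.2:4420 and that bdevperf was started as shown in the trace:
  #   build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock

  # Keep per-bdev NVMe error statistics and retry failed I/O indefinitely.
  $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the remote namespace with the TCP data digest (--ddgst) enabled.
  $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 32nd crc32c operation so data digests start failing
  # (rpc_cmd in the trace; the default target RPC socket is assumed here).
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

  # Kick off the queued bdevperf job against nvme0n1.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests

  # Read back how many commands completed with a transient transport error,
  # mirroring get_transient_errcount in the trace.
  $RPC -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The count returned by the last call is what the script compares against zero; a positive value means the injected digest corruption was observed end to end.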
00:23:10.280 [2024-11-17 22:23:06.751774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.752103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.752135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.756078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.756301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.756322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.761049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.761155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.761178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.765858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.765966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.765986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.770537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.770645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.770667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.774566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.774637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.774658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.778757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.778884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.778904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.782932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.783095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.783116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.787012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.787147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.787167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.791118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.791230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.791251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.795146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.795281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.795301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.799328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.799401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.799421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.803400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.803474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.803494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.807563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.807674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.807694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.811702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.811852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.811872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.815865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.816011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.280 [2024-11-17 22:23:06.816032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.280 [2024-11-17 22:23:06.819968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.280 [2024-11-17 22:23:06.820155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.820176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.824132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.824225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.824245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.828270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.828369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.828389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.832414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.832498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.832519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.836551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.836632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.836653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.840909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.841051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.841072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.845123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.845295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.845315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.849322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.849478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.849498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.853439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.853562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.853582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.857677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.857820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.857842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.861883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.862035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.862056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.866074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.866200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 
[2024-11-17 22:23:06.866221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.870246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.870335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.870357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.874541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.874665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.874686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.878690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.878913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.878933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.882954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.883117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.883137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.887076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.887236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.887255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.281 [2024-11-17 22:23:06.891387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.281 [2024-11-17 22:23:06.891491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.281 [2024-11-17 22:23:06.891511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.895702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.895858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.895879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.900026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.900138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.900158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.904146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.904228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.904247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.908387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.908510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.908530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.912463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.912649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.912668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.916698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.916858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.916878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.920840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.920956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.920975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.924933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.925037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.925057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.929018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.929146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.929165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.933068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.933166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.933186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.937195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.937275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.937295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.941296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.941417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.941437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.945438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.945578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.945598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.949671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.949832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.949854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.953707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.953874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.953895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.957830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.543 [2024-11-17 22:23:06.957934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.543 [2024-11-17 22:23:06.957954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.543 [2024-11-17 22:23:06.961848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:06.961949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:06.961968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:06.965964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:06.966084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:06.966105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:06.969994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:06.970094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:06.970115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:06.974139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:06.974276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:06.974297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:06.978307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:06.978551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:06.978587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:06.982590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 
[2024-11-17 22:23:06.982759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:06.982780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:06.986771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:06.986923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:06.986942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:06.990803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:06.990923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:06.990943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:06.994869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:06.995034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:06.995053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:06.998929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:06.999039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:06.999059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.002983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.003093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.003113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.007036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.007164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.007184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.011166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.011363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.011383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.015493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.015639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.015659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.019609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.019803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.019822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.023620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.023784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.023805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.027728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.027839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.027859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.031929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.032038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.032058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.035986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.036060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.036081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.040055] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.040175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.040195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.044091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.044239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.044258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.048262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.048425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.048446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.052336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.052503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.052522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.056387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.056504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.056524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.060582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.060683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.060704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.064617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.064727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.064761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
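
[editor's note, not part of the captured log] The repeated "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs above come from a test that corrupts the NVMe/TCP per-PDU data digest (DDGST), which is a CRC32C over the PDU payload; when the computed CRC32C does not match the DDGST carried in the PDU, the host fails the WRITE with the transient transport error status shown. The snippet below is only an illustrative sketch of that check, not SPDK's implementation: crc32c(), pdu_data, and received_ddgst are hypothetical names, and the CRC here is a plain bitwise reference implementation of the Castagnoli polynomial.

    /* sketch: verify an NVMe/TCP data digest (CRC32C) over a PDU payload */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                /* reflected Castagnoli polynomial 0x82F63B78 */
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1u)));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        uint8_t  pdu_data[32]   = { 0 };        /* pretend 32-byte payload (hypothetical) */
        uint32_t received_ddgst = 0x12345678u;  /* digest field from the PDU (hypothetical) */

        if (crc32c(pdu_data, sizeof(pdu_data)) != received_ddgst)
            printf("data digest mismatch -> complete command as TRANSIENT TRANSPORT ERROR (00/22)\n");
        return 0;
    }

[end of editor's note; the captured log continues below]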
00:23:10.544 [2024-11-17 22:23:07.068787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.544 [2024-11-17 22:23:07.068897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-11-17 22:23:07.068916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-11-17 22:23:07.072927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.073047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.073066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.077054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.077252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.077272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.081334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.081467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.081487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.085360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.085476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.085496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.089536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.089652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.089671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.093566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.093673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.093693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.097689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.097811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.097831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.101725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.101812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.101833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.105722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.105858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.105879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.109838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.110051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.110072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.113904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.114103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.114124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.117966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.118158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.118179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.121931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.122128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.122148] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.126073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.126219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.126240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.130084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.130171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.130192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.134125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.134236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.134257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.138171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.138297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.138317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.142681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.142868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.142888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.146802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.146971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.146991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.545 [2024-11-17 22:23:07.150775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.545 [2024-11-17 22:23:07.150906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-11-17 22:23:07.150926] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.154879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.154994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.155029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.158986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.159116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.159135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.163267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.163370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.163390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.167411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.167531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.167551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.171555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.171706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.171725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.175767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.176012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.176036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.179954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.180135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:10.807 [2024-11-17 22:23:07.180156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.184043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.184155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.184174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.188159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.188253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.188273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.192246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.192376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.192395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.196341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.196443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.196463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.200448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.200538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.200558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.204562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.204704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.204724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.208637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.208815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.208835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.212833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.212982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.213002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.216870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.217012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.217032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.220965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.221076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.221097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.225049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.225203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.225222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.229099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.229210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.229230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.233165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.807 [2024-11-17 22:23:07.233273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-11-17 22:23:07.233309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-11-17 22:23:07.237311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.237471] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.237490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.241407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.241706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.241731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.245561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.245732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.245765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.249754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.249881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.249900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.253849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.253929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.253949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.257974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.258129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.258151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.262190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.262294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.262315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.266214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.266362] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.266381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.270319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.270499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.270519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.274457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.274664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.274683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.278542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.278778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.278799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.282556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.282691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.282718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.286608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.286686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.286706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.290681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.290852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.290872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.294610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 
00:23:10.808 [2024-11-17 22:23:07.294717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.294748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.298624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.298702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.298722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.302689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.302856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.302877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.306762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.307011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.307046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.310724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.310824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.310844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.314832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.314938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.314960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.318952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.319098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.319118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.323044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.323176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.323195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.327127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.327209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.327229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.331187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.331293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.331311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.335540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.335720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.335740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.339681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.339956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.339983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.343719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.808 [2024-11-17 22:23:07.343832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-11-17 22:23:07.343852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.808 [2024-11-17 22:23:07.347892] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.348058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.348083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.351898] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.351978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.351998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.356038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.356161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.356182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.360219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.360309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.360330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.364358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.364430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.364450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.368494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.368636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.368656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.372620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.372909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.372934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.376688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.376780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.376800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.809 
[2024-11-17 22:23:07.380827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.380949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.380968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.384837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.384949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.384969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.388934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.389076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.389096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.392999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.393106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.393126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.397120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.397226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.397246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.401308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.401452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.401472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.405439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.405671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.405690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.409530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.409606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.409626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.809 [2024-11-17 22:23:07.413967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:10.809 [2024-11-17 22:23:07.414130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.809 [2024-11-17 22:23:07.414152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.418600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.418786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.418807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.423553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.423677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.423697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.428389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.428478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.428498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.432950] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.433025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.433045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.437695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.437886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.437907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.442407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.442618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.442638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.447213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.447394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.447414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.451599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.451701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.451721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.456103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.456227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.456247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.460499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.460620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.460640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.465025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.465117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.465137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.469239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.469340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.469359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.473582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.473729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.473762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.477878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.478111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.478131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.482106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.482343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.482369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.486623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.486797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.486818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.490755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.490843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.490863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.494985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.495107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.495127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.499191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.499274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.499294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.503276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.503368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.503388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.507657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.507816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.507836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.511802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.511978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.511998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.515949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.516147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.516166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.520287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.520463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.520484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.524389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.524473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.524493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.528679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.528823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 
[2024-11-17 22:23:07.528844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.532850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.532957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.532978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.536939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.537017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.537037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.541106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.541252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.541272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.545255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.545518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.545545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.549743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.549934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-11-17 22:23:07.549955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-11-17 22:23:07.553943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.069 [2024-11-17 22:23:07.554089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.554109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.557966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.558083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.558104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.562186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.562319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.562340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.566425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.566537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.566556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.570729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.570814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.570835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.574973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.575119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.575139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.579085] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.579234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.579254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.583353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.583440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.583460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.587882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.587996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.588015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.592076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.592190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.592210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.596167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.596293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.596312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.600398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.600520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.600540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.604512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.604609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.604628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.609070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.609216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.609236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.613132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.613364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.613415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.617347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.617537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.617557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.621443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.621551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.621572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.625703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.625801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.625821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.630212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.630365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.630385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.634300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.634459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.634479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.638450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.638524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.638543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.642693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.642853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.642874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.646755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 
[2024-11-17 22:23:07.646971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.647005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.651168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.651361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.651381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.655393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.655499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.655519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.659642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.659734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.659767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.663934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.664068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.664088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.668096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.668199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.668219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.672073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.672185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.672206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.676279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.070 [2024-11-17 22:23:07.676424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-11-17 22:23:07.676444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-11-17 22:23:07.680584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.330 [2024-11-17 22:23:07.680944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.330 [2024-11-17 22:23:07.680986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.330 [2024-11-17 22:23:07.684785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.330 [2024-11-17 22:23:07.684858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.330 [2024-11-17 22:23:07.684879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.330 [2024-11-17 22:23:07.688999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.330 [2024-11-17 22:23:07.689105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.330 [2024-11-17 22:23:07.689125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.330 [2024-11-17 22:23:07.693259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.330 [2024-11-17 22:23:07.693390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.330 [2024-11-17 22:23:07.693410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.697357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.697446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.697466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.701462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.701582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.701601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.705731] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.705836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.705856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.709956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.710130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.710150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.714248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.714404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.714425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.718301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.718390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.718425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.722450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.722610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.722630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.726974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.727236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.727267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.731620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.731860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.731881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:11.331 [2024-11-17 22:23:07.736273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.736386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.736406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.740727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.740852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.740872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.745601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.745742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.745786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.750443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.750621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.750641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.754902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.755017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.755037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.759248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.759410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.759430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.763500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.763658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.763678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.767610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.767709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.767730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.771775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.771951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.771971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.775842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.775923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.775946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.780028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.780156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.780176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.784162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.784250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.784271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.788178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.788284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.788304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.792322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.792488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.792508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.796352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.796585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.796604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.800408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.800494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.800515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.804591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.804709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.331 [2024-11-17 22:23:07.804728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.331 [2024-11-17 22:23:07.808616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.331 [2024-11-17 22:23:07.808716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.808747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.812696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.812831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.812851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.816755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.816859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.816879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.820832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.820908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.820928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.824871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.825019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.825039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.829063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.829346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.829373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.833067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.833190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.833209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.837311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.837470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.837490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.841368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.841457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.841477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.845455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.845612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.845633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.849548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.849656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 
[2024-11-17 22:23:07.849677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.853783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.853894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.853914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.858068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.858222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.858242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.862194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.862441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.862460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.866420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.866619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.866637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.870489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.870596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.870616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.874578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.874659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.874679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.878637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.878762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.878796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.882715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.882815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.882835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.886852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.886965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.886987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.890943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.891106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.891127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.895036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.895333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.895359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.899093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.899212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.899241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.903174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.903315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.903335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.907169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.907257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.907277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.911325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.911495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.911515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.915328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.915435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-11-17 22:23:07.915455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.332 [2024-11-17 22:23:07.919350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.332 [2024-11-17 22:23:07.919444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-11-17 22:23:07.919465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.333 [2024-11-17 22:23:07.923392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.333 [2024-11-17 22:23:07.923544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-11-17 22:23:07.923564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.333 [2024-11-17 22:23:07.927376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.333 [2024-11-17 22:23:07.927593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-11-17 22:23:07.927628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.333 [2024-11-17 22:23:07.931313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.333 [2024-11-17 22:23:07.931436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-11-17 22:23:07.931455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.333 [2024-11-17 22:23:07.935334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.333 [2024-11-17 22:23:07.935509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-11-17 22:23:07.935529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.333 [2024-11-17 22:23:07.939326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.333 [2024-11-17 22:23:07.939524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-11-17 22:23:07.939559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.593 [2024-11-17 22:23:07.943519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.593 [2024-11-17 22:23:07.943687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.593 [2024-11-17 22:23:07.943706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.593 [2024-11-17 22:23:07.947494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.593 [2024-11-17 22:23:07.947617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.593 [2024-11-17 22:23:07.947638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.593 [2024-11-17 22:23:07.951565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.593 [2024-11-17 22:23:07.951649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.593 [2024-11-17 22:23:07.951669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:07.955730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:07.955889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:07.955910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:07.959845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:07.960090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:07.960124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:07.963853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 
[2024-11-17 22:23:07.964010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:07.964031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:07.967955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:07.968089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:07.968108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:07.972029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:07.972110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:07.972130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:07.976137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:07.976271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:07.976290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:07.980276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:07.980363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:07.980382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:07.984315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:07.984401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:07.984421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:07.988427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:07.988578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:07.988598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:07.992473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:07.992774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:07.992799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:07.996533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:07.996620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:07.996639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.000587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.000766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.000785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.004677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.004800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.004821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.008754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.008913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.008932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.012837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.012947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.012967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.016923] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.017013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.017032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.021039] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.021184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.021204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.025058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.025252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.025271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.029164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.029363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.029383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.033191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.033313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.033332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.037317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.037410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.037430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.041473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.041630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.041649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.045566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.045678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.045698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:11.594 [2024-11-17 22:23:08.049672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.049771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.049791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.053866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.054035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.054056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.057807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.594 [2024-11-17 22:23:08.058130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.594 [2024-11-17 22:23:08.058167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.594 [2024-11-17 22:23:08.061884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.061980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.062024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.066082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.066195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.066215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.070067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.070148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.070168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.074162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.074307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.074327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.078206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.078307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.078327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.082242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.082324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.082358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.086397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.086543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.086562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.090460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.090702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.090723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.094469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.094653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.094672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.098565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.098723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.098758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.102538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.102620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.102640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.106556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.106684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.106704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.110578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.110658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.110679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.114470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.114561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.114581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.118649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.118810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.118830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.122593] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.122838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.122863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.126719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.126963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.126988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.130895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.131043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.131063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.134908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.134994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.135013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.139047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.139170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.139190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.143059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.143180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.143200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.147125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.147237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.147258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.151272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.151424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.151443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.155347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.155601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.155626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.159372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.159579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 
22:23:08.159611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.163535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.163670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.163690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.595 [2024-11-17 22:23:08.167598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.595 [2024-11-17 22:23:08.167746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.595 [2024-11-17 22:23:08.167778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.596 [2024-11-17 22:23:08.171864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.596 [2024-11-17 22:23:08.171999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.596 [2024-11-17 22:23:08.172019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.596 [2024-11-17 22:23:08.175874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.596 [2024-11-17 22:23:08.175991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.596 [2024-11-17 22:23:08.176010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.596 [2024-11-17 22:23:08.179925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.596 [2024-11-17 22:23:08.180014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.596 [2024-11-17 22:23:08.180040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.596 [2024-11-17 22:23:08.184074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.596 [2024-11-17 22:23:08.184223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.596 [2024-11-17 22:23:08.184242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.596 [2024-11-17 22:23:08.188218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.596 [2024-11-17 22:23:08.188491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:11.596 [2024-11-17 22:23:08.188524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.596 [2024-11-17 22:23:08.192275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.596 [2024-11-17 22:23:08.192445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.596 [2024-11-17 22:23:08.192465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.596 [2024-11-17 22:23:08.196474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.596 [2024-11-17 22:23:08.196595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.596 [2024-11-17 22:23:08.196615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.596 [2024-11-17 22:23:08.200602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.596 [2024-11-17 22:23:08.200730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.596 [2024-11-17 22:23:08.200751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.857 [2024-11-17 22:23:08.204752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.857 [2024-11-17 22:23:08.204973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.857 [2024-11-17 22:23:08.204994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.857 [2024-11-17 22:23:08.208856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.857 [2024-11-17 22:23:08.208969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.857 [2024-11-17 22:23:08.208988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.857 [2024-11-17 22:23:08.213013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.857 [2024-11-17 22:23:08.213094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.857 [2024-11-17 22:23:08.213114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.857 [2024-11-17 22:23:08.217129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.857 [2024-11-17 22:23:08.217288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.857 [2024-11-17 22:23:08.217308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.857 [2024-11-17 22:23:08.221219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.857 [2024-11-17 22:23:08.221386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.857 [2024-11-17 22:23:08.221406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.857 [2024-11-17 22:23:08.225400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.857 [2024-11-17 22:23:08.225579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.857 [2024-11-17 22:23:08.225599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.857 [2024-11-17 22:23:08.229537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.857 [2024-11-17 22:23:08.229682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.857 [2024-11-17 22:23:08.229702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.857 [2024-11-17 22:23:08.233574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.857 [2024-11-17 22:23:08.233674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.857 [2024-11-17 22:23:08.233694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.857 [2024-11-17 22:23:08.237798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.857 [2024-11-17 22:23:08.237957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.857 [2024-11-17 22:23:08.237978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.857 [2024-11-17 22:23:08.241852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.857 [2024-11-17 22:23:08.241979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.857 [2024-11-17 22:23:08.242032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.857 [2024-11-17 22:23:08.245909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.245983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.246027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.250070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.250231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.250252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.254183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.254424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.254449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.258469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.258639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.258660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.262519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.262648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.262667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.266579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.266675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.266701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.270675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.270843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.270863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.274617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.274726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.274748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.278719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.278834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.278854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.282720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.282874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.282894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.286754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.287010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.287064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.290867] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.290983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.291004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.294972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.295138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.295159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.299024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.299110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.299130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.303114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 
[2024-11-17 22:23:08.303272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.303293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.307086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.307174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.307194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.311144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.311220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.311239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.315251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.315397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.315417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.319386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.319586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.319613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.323471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.323703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.323767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.327553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.327670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.327690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.331704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) 
with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.331825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.331846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.335775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.335902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.335922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.339791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.339882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.339903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.343832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.343904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.343924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.347910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.348056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.348076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.351914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.858 [2024-11-17 22:23:08.352154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.858 [2024-11-17 22:23:08.352174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.858 [2024-11-17 22:23:08.355900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.356087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.356108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.359949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.360057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.360077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.363958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.364082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.364102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.368097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.368235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.368255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.372125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.372201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.372221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.376128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.376213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.376233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.380182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.380335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.380355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.384255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.384453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.384473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.388361] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.388540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.388560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.392468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.392595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.392614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.396586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.396711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.396731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.400799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.400923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.400950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.404839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.404949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.404970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.408858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.408943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.408963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.412983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.413129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.413148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
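The "Data digest error" entries above come from NVMe/TCP data digest (DDGST) verification: the receiver recomputes a CRC32C over each data PDU's payload and, on mismatch, the affected WRITE completes with the TRANSIENT TRANSPORT ERROR (00/22) status shown (dnr:0, so the command is retryable). A minimal reference sketch of that checksum, assuming plain C with no SPDK headers (SPDK itself uses its own, typically accelerated, CRC32C helpers rather than this bitwise loop):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the checksum
 * NVMe/TCP uses for the optional header (HDGST) and data (DDGST) digests.
 * Illustrative only. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
                crc ^= buf[i];
                for (int b = 0; b < 8; b++) {
                        crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
                }
        }
        return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
        /* Known-answer check: CRC32C("123456789") == 0xE3069283 */
        const uint8_t msg[] = "123456789";

        printf("0x%08X\n", crc32c(msg, sizeof(msg) - 1));
        return 0;
}

Compiling and running this prints 0xE3069283, the standard CRC-32C check value; a data PDU whose trailing digest does not match this recomputation is exactly what each data_crc32_calc_done error above reports.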
00:23:11.859 [2024-11-17 22:23:08.417037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.417282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.417308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.421068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.421178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.421198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.425196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.425328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.425348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.429368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.429460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.429480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.433446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.433618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.433639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.437546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.437656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.437675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.441703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.441827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.441848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.445837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.446023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.446060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.449986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.450303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.450354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.454112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.454238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.454262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.458348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.458545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.458565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.462410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.859 [2024-11-17 22:23:08.462487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.859 [2024-11-17 22:23:08.462507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.859 [2024-11-17 22:23:08.466445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:11.860 [2024-11-17 22:23:08.466693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.860 [2024-11-17 22:23:08.466729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.120 [2024-11-17 22:23:08.470485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.470594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.470613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.474471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.474602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.474623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.478739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.478910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.478930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.482690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.482915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.482950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.486787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.486971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.486992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.490765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.490890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.490909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.494712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.494810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.494831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.498844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.498966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.498986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.502852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.502949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.502969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.506821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.506907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.506927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.510909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.511061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.511082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.514842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.515039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.515059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.518929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.519092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.519112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.522889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.523008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.523027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.526938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.527016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 
[2024-11-17 22:23:08.527036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.530953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.531074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.531094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.534936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.535015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.535036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.538955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.539029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.539049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.543031] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.543178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.543198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.547010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.547314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.547340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.551075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.551165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.551184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.555222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.555346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.555365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.559194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.559306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.559325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.563291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.563412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.563432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.567274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.567376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.567396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.571385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.571465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.571485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.575515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.121 [2024-11-17 22:23:08.575659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-11-17 22:23:08.575679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-11-17 22:23:08.579730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.579979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.580005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.583942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.584074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.584094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.588018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.588139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.588159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.592096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.592168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.592188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.596159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.596280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.596300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.600207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.600316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.600337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.604324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.604410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.604430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.608440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.608590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.608610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.612443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.612709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.612729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.616619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.616764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.616785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.620732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.620853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.620873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.624857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.624945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.624966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.628900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.629059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.629078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.632928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.633047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.633068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.637022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.637108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.637129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.641249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 
22:23:08.641413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.641434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.645323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.645591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.645623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.649639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.649864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.649901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.654098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.654214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.654234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.658357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.658475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.658494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.663209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.663357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.663377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.667784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.667901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.667921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.672466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with 
pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.672539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.672559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.676996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.677143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.677163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.681518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.681805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.681831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.685991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.686112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.686133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.690474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.122 [2024-11-17 22:23:08.690580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-11-17 22:23:08.690599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-11-17 22:23:08.694784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.123 [2024-11-17 22:23:08.694861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.123 [2024-11-17 22:23:08.694881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.123 [2024-11-17 22:23:08.699045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.123 [2024-11-17 22:23:08.699228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.123 [2024-11-17 22:23:08.699255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.123 [2024-11-17 22:23:08.703587] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.123 [2024-11-17 22:23:08.703689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.123 [2024-11-17 22:23:08.703709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.123 [2024-11-17 22:23:08.707937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.123 [2024-11-17 22:23:08.708012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.123 [2024-11-17 22:23:08.708032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.123 [2024-11-17 22:23:08.712218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.123 [2024-11-17 22:23:08.712363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.123 [2024-11-17 22:23:08.712383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.123 [2024-11-17 22:23:08.716236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.123 [2024-11-17 22:23:08.716419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.123 [2024-11-17 22:23:08.716439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.123 [2024-11-17 22:23:08.720724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.123 [2024-11-17 22:23:08.720932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.123 [2024-11-17 22:23:08.720952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.123 [2024-11-17 22:23:08.724925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.123 [2024-11-17 22:23:08.725055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.123 [2024-11-17 22:23:08.725074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.123 [2024-11-17 22:23:08.729092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.123 [2024-11-17 22:23:08.729167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.123 [2024-11-17 22:23:08.729186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.382 [2024-11-17 22:23:08.733329] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.382 [2024-11-17 22:23:08.733483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.382 [2024-11-17 22:23:08.733518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.382 [2024-11-17 22:23:08.737447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.382 [2024-11-17 22:23:08.737557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.382 [2024-11-17 22:23:08.737578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.382 [2024-11-17 22:23:08.741775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.382 [2024-11-17 22:23:08.741888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.382 [2024-11-17 22:23:08.741908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.382 [2024-11-17 22:23:08.746176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1877a90) with pdu=0x2000190fef90 00:23:12.382 [2024-11-17 22:23:08.746304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.382 [2024-11-17 22:23:08.746326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.382 00:23:12.382 Latency(us) 00:23:12.382 [2024-11-17T22:23:08.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.382 [2024-11-17T22:23:08.997Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:12.382 nvme0n1 : 2.00 7451.29 931.41 0.00 0.00 2142.61 1668.19 5093.93 00:23:12.382 [2024-11-17T22:23:08.997Z] =================================================================================================================== 00:23:12.382 [2024-11-17T22:23:08.997Z] Total : 7451.29 931.41 0.00 0.00 2142.61 1668.19 5093.93 00:23:12.382 0 00:23:12.382 22:23:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:12.382 22:23:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:12.382 22:23:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:12.382 22:23:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:12.382 | .driver_specific 00:23:12.382 | .nvme_error 00:23:12.382 | .status_code 00:23:12.382 | .command_transient_transport_error' 00:23:12.641 22:23:09 -- host/digest.sh@71 -- # (( 481 > 0 )) 00:23:12.641 22:23:09 -- host/digest.sh@73 -- # killprocess 87402 00:23:12.641 22:23:09 -- common/autotest_common.sh@936 -- # '[' -z 87402 ']' 00:23:12.641 22:23:09 -- common/autotest_common.sh@940 -- # kill -0 87402 00:23:12.641 22:23:09 -- common/autotest_common.sh@941 -- # uname 00:23:12.641 22:23:09 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:23:12.641 22:23:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87402 00:23:12.641 22:23:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:12.641 22:23:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:12.641 killing process with pid 87402 00:23:12.641 22:23:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87402' 00:23:12.641 Received shutdown signal, test time was about 2.000000 seconds 00:23:12.641 00:23:12.641 Latency(us) 00:23:12.641 [2024-11-17T22:23:09.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.641 [2024-11-17T22:23:09.256Z] =================================================================================================================== 00:23:12.641 [2024-11-17T22:23:09.256Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.641 22:23:09 -- common/autotest_common.sh@955 -- # kill 87402 00:23:12.641 22:23:09 -- common/autotest_common.sh@960 -- # wait 87402 00:23:12.899 22:23:09 -- host/digest.sh@115 -- # killprocess 87086 00:23:12.899 22:23:09 -- common/autotest_common.sh@936 -- # '[' -z 87086 ']' 00:23:12.899 22:23:09 -- common/autotest_common.sh@940 -- # kill -0 87086 00:23:12.899 22:23:09 -- common/autotest_common.sh@941 -- # uname 00:23:12.899 22:23:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:12.899 22:23:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87086 00:23:12.899 22:23:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:12.899 22:23:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:12.899 killing process with pid 87086 00:23:12.899 22:23:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87086' 00:23:12.899 22:23:09 -- common/autotest_common.sh@955 -- # kill 87086 00:23:12.899 22:23:09 -- common/autotest_common.sh@960 -- # wait 87086 00:23:13.156 00:23:13.156 real 0m18.693s 00:23:13.156 user 0m34.347s 00:23:13.156 sys 0m5.588s 00:23:13.156 22:23:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:13.156 22:23:09 -- common/autotest_common.sh@10 -- # set +x 00:23:13.156 ************************************ 00:23:13.156 END TEST nvmf_digest_error 00:23:13.156 ************************************ 00:23:13.156 22:23:09 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:13.156 22:23:09 -- host/digest.sh@139 -- # nvmftestfini 00:23:13.156 22:23:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:13.156 22:23:09 -- nvmf/common.sh@116 -- # sync 00:23:13.156 22:23:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:13.156 22:23:09 -- nvmf/common.sh@119 -- # set +e 00:23:13.156 22:23:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:13.156 22:23:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:13.156 rmmod nvme_tcp 00:23:13.156 rmmod nvme_fabrics 00:23:13.156 rmmod nvme_keyring 00:23:13.156 22:23:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:13.156 22:23:09 -- nvmf/common.sh@123 -- # set -e 00:23:13.156 22:23:09 -- nvmf/common.sh@124 -- # return 0 00:23:13.156 22:23:09 -- nvmf/common.sh@477 -- # '[' -n 87086 ']' 00:23:13.156 22:23:09 -- nvmf/common.sh@478 -- # killprocess 87086 00:23:13.156 22:23:09 -- common/autotest_common.sh@936 -- # '[' -z 87086 ']' 00:23:13.156 22:23:09 -- common/autotest_common.sh@940 -- # kill -0 87086 00:23:13.156 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87086) - No such process 00:23:13.156 Process with pid 87086 is not found 
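The digest-error run above is scored by counting how many of those injected failures the controller reported as transient transport errors: get_transient_errcount queries bdevperf's RPC socket with bdev_get_iostat and filters the result with jq, exactly as traced at 22:23:08/22:23:09 above. A minimal stand-alone sketch of that check follows; the socket path, bdev name, and jq filter are copied from the trace, and the ">0" success condition mirrors the (( 481 > 0 )) test, so treat it as an illustration rather than the suite's exact helper.

    #!/usr/bin/env bash
    # Sketch of the transient-error check traced above (host/digest.sh, get_transient_errcount).
    # Assumes bdevperf is still running with its RPC socket at /var/tmp/bperf.sock
    # and exposes the bdev nvme0n1.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Pull the per-bdev NVMe error counters and keep only the transient transport errors.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # The digest test passes when at least one injected digest error was observed.
    (( errcount > 0 )) && echo "observed $errcount transient transport errors"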
00:23:13.156 22:23:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87086 is not found' 00:23:13.156 22:23:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:13.156 22:23:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:13.156 22:23:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:13.156 22:23:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.156 22:23:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:13.156 22:23:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.156 22:23:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.156 22:23:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.415 22:23:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:13.415 00:23:13.415 real 0m38.470s 00:23:13.415 user 1m9.210s 00:23:13.415 sys 0m11.493s 00:23:13.415 22:23:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:13.415 22:23:09 -- common/autotest_common.sh@10 -- # set +x 00:23:13.415 ************************************ 00:23:13.415 END TEST nvmf_digest 00:23:13.415 ************************************ 00:23:13.415 22:23:09 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:13.415 22:23:09 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:13.415 22:23:09 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:13.415 22:23:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:13.415 22:23:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:13.415 22:23:09 -- common/autotest_common.sh@10 -- # set +x 00:23:13.415 ************************************ 00:23:13.415 START TEST nvmf_mdns_discovery 00:23:13.415 ************************************ 00:23:13.415 22:23:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:13.415 * Looking for test storage... 00:23:13.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:13.415 22:23:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:13.415 22:23:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:13.415 22:23:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:13.415 22:23:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:13.415 22:23:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:13.415 22:23:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:13.415 22:23:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:13.415 22:23:10 -- scripts/common.sh@335 -- # IFS=.-: 00:23:13.415 22:23:10 -- scripts/common.sh@335 -- # read -ra ver1 00:23:13.415 22:23:10 -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.415 22:23:10 -- scripts/common.sh@336 -- # read -ra ver2 00:23:13.415 22:23:10 -- scripts/common.sh@337 -- # local 'op=<' 00:23:13.415 22:23:10 -- scripts/common.sh@339 -- # ver1_l=2 00:23:13.415 22:23:10 -- scripts/common.sh@340 -- # ver2_l=1 00:23:13.415 22:23:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:13.415 22:23:10 -- scripts/common.sh@343 -- # case "$op" in 00:23:13.415 22:23:10 -- scripts/common.sh@344 -- # : 1 00:23:13.415 22:23:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:13.415 22:23:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:13.675 22:23:10 -- scripts/common.sh@364 -- # decimal 1 00:23:13.675 22:23:10 -- scripts/common.sh@352 -- # local d=1 00:23:13.675 22:23:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.675 22:23:10 -- scripts/common.sh@354 -- # echo 1 00:23:13.675 22:23:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:13.675 22:23:10 -- scripts/common.sh@365 -- # decimal 2 00:23:13.675 22:23:10 -- scripts/common.sh@352 -- # local d=2 00:23:13.675 22:23:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.675 22:23:10 -- scripts/common.sh@354 -- # echo 2 00:23:13.675 22:23:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:13.675 22:23:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:13.675 22:23:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:13.675 22:23:10 -- scripts/common.sh@367 -- # return 0 00:23:13.675 22:23:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:13.675 22:23:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:13.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.675 --rc genhtml_branch_coverage=1 00:23:13.675 --rc genhtml_function_coverage=1 00:23:13.675 --rc genhtml_legend=1 00:23:13.675 --rc geninfo_all_blocks=1 00:23:13.675 --rc geninfo_unexecuted_blocks=1 00:23:13.675 00:23:13.675 ' 00:23:13.675 22:23:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:13.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.675 --rc genhtml_branch_coverage=1 00:23:13.675 --rc genhtml_function_coverage=1 00:23:13.675 --rc genhtml_legend=1 00:23:13.675 --rc geninfo_all_blocks=1 00:23:13.675 --rc geninfo_unexecuted_blocks=1 00:23:13.675 00:23:13.675 ' 00:23:13.675 22:23:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:13.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.675 --rc genhtml_branch_coverage=1 00:23:13.675 --rc genhtml_function_coverage=1 00:23:13.675 --rc genhtml_legend=1 00:23:13.675 --rc geninfo_all_blocks=1 00:23:13.675 --rc geninfo_unexecuted_blocks=1 00:23:13.675 00:23:13.675 ' 00:23:13.675 22:23:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:13.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.675 --rc genhtml_branch_coverage=1 00:23:13.675 --rc genhtml_function_coverage=1 00:23:13.675 --rc genhtml_legend=1 00:23:13.675 --rc geninfo_all_blocks=1 00:23:13.675 --rc geninfo_unexecuted_blocks=1 00:23:13.675 00:23:13.675 ' 00:23:13.675 22:23:10 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:13.675 22:23:10 -- nvmf/common.sh@7 -- # uname -s 00:23:13.675 22:23:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.675 22:23:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.675 22:23:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.675 22:23:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.675 22:23:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.675 22:23:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.675 22:23:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.675 22:23:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.675 22:23:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.675 22:23:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.675 22:23:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 
00:23:13.675 22:23:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:23:13.675 22:23:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.675 22:23:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.675 22:23:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:13.675 22:23:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:13.676 22:23:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.676 22:23:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.676 22:23:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.676 22:23:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.676 22:23:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.676 22:23:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.676 22:23:10 -- paths/export.sh@5 -- # export PATH 00:23:13.676 22:23:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.676 22:23:10 -- nvmf/common.sh@46 -- # : 0 00:23:13.676 22:23:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:13.676 22:23:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:13.676 22:23:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:13.676 22:23:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.676 22:23:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.676 22:23:10 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:23:13.676 22:23:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:13.676 22:23:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:13.676 22:23:10 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:13.676 22:23:10 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:13.676 22:23:10 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:13.676 22:23:10 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:13.676 22:23:10 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:13.676 22:23:10 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:13.676 22:23:10 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:13.676 22:23:10 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:13.676 22:23:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:13.676 22:23:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.676 22:23:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:13.676 22:23:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:13.676 22:23:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:13.676 22:23:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.676 22:23:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.676 22:23:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.676 22:23:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:13.676 22:23:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:13.676 22:23:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:13.676 22:23:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:13.676 22:23:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:13.676 22:23:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:13.676 22:23:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.676 22:23:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.676 22:23:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:13.676 22:23:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:13.676 22:23:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:13.676 22:23:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:13.676 22:23:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:13.676 22:23:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.676 22:23:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:13.676 22:23:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:13.676 22:23:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:13.676 22:23:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:13.676 22:23:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:13.676 22:23:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:13.676 Cannot find device "nvmf_tgt_br" 00:23:13.676 22:23:10 -- nvmf/common.sh@154 -- # true 00:23:13.676 22:23:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:13.676 Cannot find device "nvmf_tgt_br2" 00:23:13.676 22:23:10 -- nvmf/common.sh@155 -- # true 00:23:13.676 22:23:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:13.676 22:23:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:13.676 Cannot find device "nvmf_tgt_br" 00:23:13.676 22:23:10 -- nvmf/common.sh@157 -- # true 00:23:13.676 
22:23:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:13.676 Cannot find device "nvmf_tgt_br2" 00:23:13.676 22:23:10 -- nvmf/common.sh@158 -- # true 00:23:13.676 22:23:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:13.676 22:23:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:13.676 22:23:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:13.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:13.676 22:23:10 -- nvmf/common.sh@161 -- # true 00:23:13.676 22:23:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:13.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:13.676 22:23:10 -- nvmf/common.sh@162 -- # true 00:23:13.676 22:23:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:13.676 22:23:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:13.676 22:23:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:13.676 22:23:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:13.676 22:23:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:13.676 22:23:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:13.676 22:23:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:13.676 22:23:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:13.676 22:23:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:13.676 22:23:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:13.676 22:23:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:13.676 22:23:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:13.676 22:23:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:13.676 22:23:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:13.935 22:23:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:13.935 22:23:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:13.935 22:23:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:13.935 22:23:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:13.935 22:23:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:13.935 22:23:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:13.935 22:23:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:13.935 22:23:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:13.935 22:23:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:13.935 22:23:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:13.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:23:13.935 00:23:13.935 --- 10.0.0.2 ping statistics --- 00:23:13.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.935 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:23:13.935 22:23:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:13.935 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:13.935 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:23:13.935 00:23:13.935 --- 10.0.0.3 ping statistics --- 00:23:13.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.935 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:23:13.935 22:23:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:13.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:13.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:23:13.935 00:23:13.935 --- 10.0.0.1 ping statistics --- 00:23:13.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.935 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:23:13.935 22:23:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.935 22:23:10 -- nvmf/common.sh@421 -- # return 0 00:23:13.935 22:23:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:13.935 22:23:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.935 22:23:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:13.935 22:23:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:13.935 22:23:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.935 22:23:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:13.935 22:23:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:13.935 22:23:10 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:13.935 22:23:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:13.935 22:23:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:13.935 22:23:10 -- common/autotest_common.sh@10 -- # set +x 00:23:13.935 22:23:10 -- nvmf/common.sh@469 -- # nvmfpid=87705 00:23:13.935 22:23:10 -- nvmf/common.sh@470 -- # waitforlisten 87705 00:23:13.935 22:23:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:13.935 22:23:10 -- common/autotest_common.sh@829 -- # '[' -z 87705 ']' 00:23:13.935 22:23:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.935 22:23:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.935 22:23:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.935 22:23:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.935 22:23:10 -- common/autotest_common.sh@10 -- # set +x 00:23:13.935 [2024-11-17 22:23:10.460574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:13.935 [2024-11-17 22:23:10.460633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.194 [2024-11-17 22:23:10.597597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.194 [2024-11-17 22:23:10.705845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:14.194 [2024-11-17 22:23:10.706040] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.194 [2024-11-17 22:23:10.706064] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
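The ping exchange above completes nvmf_veth_init: a network namespace nvmf_tgt_ns_spdk holds the target-side veth ends, a bridge nvmf_br joins the host-side ends, the initiator keeps 10.0.0.1/24 while 10.0.0.2/24 and 10.0.0.3/24 live inside the namespace, and an iptables rule admits TCP/4420. Condensed from the commands in the trace (names and addresses exactly as logged; the per-interface "ip link set ... up" steps are elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # verify both directions before starting the target
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt launched above (pid 87705) runs inside that namespace via "ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc", which is why its listeners appear on 10.0.0.2/10.0.0.3 while the host side connects from 10.0.0.1.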
00:23:14.194 [2024-11-17 22:23:10.706077] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.194 [2024-11-17 22:23:10.706125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.129 22:23:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.129 22:23:11 -- common/autotest_common.sh@862 -- # return 0 00:23:15.129 22:23:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:15.129 22:23:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:15.129 22:23:11 -- common/autotest_common.sh@10 -- # set +x 00:23:15.129 22:23:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.129 22:23:11 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:15.129 22:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.129 22:23:11 -- common/autotest_common.sh@10 -- # set +x 00:23:15.129 22:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.129 22:23:11 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:15.129 22:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.129 22:23:11 -- common/autotest_common.sh@10 -- # set +x 00:23:15.129 22:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.129 22:23:11 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:15.129 22:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.129 22:23:11 -- common/autotest_common.sh@10 -- # set +x 00:23:15.129 [2024-11-17 22:23:11.640949] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.129 22:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.129 22:23:11 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:15.129 22:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.130 22:23:11 -- common/autotest_common.sh@10 -- # set +x 00:23:15.130 [2024-11-17 22:23:11.649073] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:15.130 22:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.130 22:23:11 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:15.130 22:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.130 22:23:11 -- common/autotest_common.sh@10 -- # set +x 00:23:15.130 null0 00:23:15.130 22:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.130 22:23:11 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:15.130 22:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.130 22:23:11 -- common/autotest_common.sh@10 -- # set +x 00:23:15.130 null1 00:23:15.130 22:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.130 22:23:11 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:15.130 22:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.130 22:23:11 -- common/autotest_common.sh@10 -- # set +x 00:23:15.130 null2 00:23:15.130 22:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.130 22:23:11 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:15.130 22:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.130 22:23:11 -- common/autotest_common.sh@10 -- # set +x 00:23:15.130 null3 00:23:15.130 22:23:11 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.130 22:23:11 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:23:15.130 22:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.130 22:23:11 -- common/autotest_common.sh@10 -- # set +x 00:23:15.130 22:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.130 22:23:11 -- host/mdns_discovery.sh@47 -- # hostpid=87755 00:23:15.130 22:23:11 -- host/mdns_discovery.sh@48 -- # waitforlisten 87755 /tmp/host.sock 00:23:15.130 22:23:11 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:15.130 22:23:11 -- common/autotest_common.sh@829 -- # '[' -z 87755 ']' 00:23:15.130 22:23:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:15.130 22:23:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.130 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:15.130 22:23:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:15.130 22:23:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.130 22:23:11 -- common/autotest_common.sh@10 -- # set +x 00:23:15.388 [2024-11-17 22:23:11.755163] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:15.388 [2024-11-17 22:23:11.755265] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87755 ] 00:23:15.388 [2024-11-17 22:23:11.895297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.388 [2024-11-17 22:23:11.990207] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:15.388 [2024-11-17 22:23:11.990422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.323 22:23:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:16.323 22:23:12 -- common/autotest_common.sh@862 -- # return 0 00:23:16.323 22:23:12 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:16.323 22:23:12 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:16.323 22:23:12 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:16.323 22:23:12 -- host/mdns_discovery.sh@57 -- # avahipid=87785 00:23:16.323 22:23:12 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:16.323 22:23:12 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:16.323 22:23:12 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:16.323 Process 1061 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:16.323 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:16.323 Successfully dropped root privileges. 00:23:16.323 avahi-daemon 0.8 starting up. 00:23:16.323 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:16.323 Successfully called chroot(). 00:23:16.323 Successfully dropped remaining capabilities. 00:23:16.323 No service file found in /etc/avahi/services. 00:23:17.258 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 
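The avahi-daemon started above (pid 87785) runs inside the target namespace and reads its configuration from /dev/fd/63, i.e. from the process substitution carrying the echo -e string in the trace. Unfolded for readability (printf is used here in place of the traced echo -e; the configuration content is exactly as logged):

    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
        '[server]' \
        'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
        'use-ipv4=yes' \
        'use-ipv6=no')

Restricting allow-interfaces to the two target veths keeps mDNS traffic on the namespace side of the bridge; the "No NSS support for mDNS" warning appears harmless here, since discovery is performed by SPDK's bdev_mdns_client rather than by NSS name lookups.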
00:23:17.258 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:17.258 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:17.258 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:17.258 Network interface enumeration completed. 00:23:17.258 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:23:17.258 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:17.258 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:17.258 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:17.258 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 834174564. 00:23:17.259 22:23:13 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:17.259 22:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.259 22:23:13 -- common/autotest_common.sh@10 -- # set +x 00:23:17.259 22:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.259 22:23:13 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:17.259 22:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.259 22:23:13 -- common/autotest_common.sh@10 -- # set +x 00:23:17.259 22:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.259 22:23:13 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:17.259 22:23:13 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:17.259 22:23:13 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.259 22:23:13 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:17.259 22:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.259 22:23:13 -- host/mdns_discovery.sh@68 -- # xargs 00:23:17.259 22:23:13 -- host/mdns_discovery.sh@68 -- # sort 00:23:17.259 22:23:13 -- common/autotest_common.sh@10 -- # set +x 00:23:17.259 22:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.517 22:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.517 22:23:13 -- common/autotest_common.sh@10 -- # set +x 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@64 -- # sort 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@64 -- # xargs 00:23:17.517 22:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:17.517 22:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.517 22:23:13 -- common/autotest_common.sh@10 -- # set +x 00:23:17.517 22:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:17.517 22:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@68 -- # sort 
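On the initiator side the test drives a second nvmf_tgt as the "host" application (pid 87755, started above with -m 0x1 -r /tmp/host.sock) over its own RPC socket. The two RPCs traced above are what actually start mDNS-based discovery; a condensed sketch, assuming the rpc_cmd wrapper in the trace resolves to SPDK's usual rpc.py client with the same arguments:

    # enable bdev_nvme debug logging on the host application
    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    # browse _nvme-disc._tcp via avahi, attach to advertised discovery
    # controllers, and prefix everything created from them with "mdns"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

At this point nothing has been published yet, so the bdev_nvme_get_controllers / bdev_get_bdevs checks that follow correctly return empty lists; the mdns0_nvme0 and mdns1_nvme0 controllers only appear after the target registers its CDC service with avahi-publish further down.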
00:23:17.517 22:23:13 -- common/autotest_common.sh@10 -- # set +x 00:23:17.517 22:23:13 -- host/mdns_discovery.sh@68 -- # xargs 00:23:17.517 22:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@64 -- # sort 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@64 -- # xargs 00:23:17.517 22:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.517 22:23:14 -- common/autotest_common.sh@10 -- # set +x 00:23:17.517 22:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:17.517 22:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.517 22:23:14 -- common/autotest_common.sh@10 -- # set +x 00:23:17.517 22:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:17.517 22:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.517 22:23:14 -- common/autotest_common.sh@10 -- # set +x 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@68 -- # sort 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@68 -- # xargs 00:23:17.517 22:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@64 -- # sort 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:17.517 22:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.517 22:23:14 -- host/mdns_discovery.sh@64 -- # xargs 00:23:17.517 22:23:14 -- common/autotest_common.sh@10 -- # set +x 00:23:17.517 [2024-11-17 22:23:14.122393] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:17.776 22:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.776 22:23:14 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:17.776 22:23:14 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:17.776 22:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.776 22:23:14 -- common/autotest_common.sh@10 -- # set +x 00:23:17.776 [2024-11-17 22:23:14.178115] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.776 22:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.776 22:23:14 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:17.776 22:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.776 22:23:14 -- common/autotest_common.sh@10 -- # set +x 00:23:17.776 22:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.776 
22:23:14 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:17.776 22:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.776 22:23:14 -- common/autotest_common.sh@10 -- # set +x 00:23:17.776 22:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.776 22:23:14 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:17.776 22:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.776 22:23:14 -- common/autotest_common.sh@10 -- # set +x 00:23:17.776 22:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.776 22:23:14 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:17.776 22:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.776 22:23:14 -- common/autotest_common.sh@10 -- # set +x 00:23:17.776 22:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.776 22:23:14 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:17.776 22:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.776 22:23:14 -- common/autotest_common.sh@10 -- # set +x 00:23:17.776 [2024-11-17 22:23:14.217918] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:17.776 22:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.776 22:23:14 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:17.776 22:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.776 22:23:14 -- common/autotest_common.sh@10 -- # set +x 00:23:17.776 [2024-11-17 22:23:14.225917] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:17.776 22:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.776 22:23:14 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=87842 00:23:17.777 22:23:14 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:17.777 22:23:14 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:18.712 [2024-11-17 22:23:15.022391] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:18.712 Established under name 'CDC' 00:23:18.970 [2024-11-17 22:23:15.422418] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:18.970 [2024-11-17 22:23:15.422451] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:18.970 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:18.970 cookie is 0 00:23:18.970 is_local: 1 00:23:18.970 our_own: 0 00:23:18.970 wide_area: 0 00:23:18.970 multicast: 1 00:23:18.970 cached: 1 00:23:18.970 [2024-11-17 22:23:15.522397] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:18.970 [2024-11-17 22:23:15.522422] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:18.970 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:18.970 cookie is 0 00:23:18.970 is_local: 1 00:23:18.970 our_own: 0 00:23:18.970 wide_area: 0 00:23:18.970 multicast: 1 00:23:18.970 
cached: 1 00:23:19.904 [2024-11-17 22:23:16.435274] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:19.904 [2024-11-17 22:23:16.435308] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:19.904 [2024-11-17 22:23:16.435328] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:20.163 [2024-11-17 22:23:16.521373] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:20.163 [2024-11-17 22:23:16.534889] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:20.163 [2024-11-17 22:23:16.534914] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:20.163 [2024-11-17 22:23:16.534945] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:20.163 [2024-11-17 22:23:16.587404] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:20.163 [2024-11-17 22:23:16.587436] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:20.163 [2024-11-17 22:23:16.621584] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:20.163 [2024-11-17 22:23:16.676147] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:20.163 [2024-11-17 22:23:16.676177] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:22.698 22:23:19 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:22.698 22:23:19 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:22.698 22:23:19 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:22.698 22:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.698 22:23:19 -- common/autotest_common.sh@10 -- # set +x 00:23:22.698 22:23:19 -- host/mdns_discovery.sh@80 -- # xargs 00:23:22.698 22:23:19 -- host/mdns_discovery.sh@80 -- # sort 00:23:22.699 22:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.699 22:23:19 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:22.699 22:23:19 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:22.699 22:23:19 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:22.699 22:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.699 22:23:19 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:22.699 22:23:19 -- common/autotest_common.sh@10 -- # set +x 00:23:22.699 22:23:19 -- host/mdns_discovery.sh@76 -- # xargs 00:23:22.699 22:23:19 -- host/mdns_discovery.sh@76 -- # sort 00:23:22.699 22:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@68 -- # jq -r 
'.[].name' 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@68 -- # sort 00:23:22.959 22:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.959 22:23:19 -- common/autotest_common.sh@10 -- # set +x 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@68 -- # xargs 00:23:22.959 22:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.959 22:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:22.959 22:23:19 -- common/autotest_common.sh@10 -- # set +x 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@64 -- # xargs 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@64 -- # sort 00:23:22.959 22:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:22.959 22:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.959 22:23:19 -- common/autotest_common.sh@10 -- # set +x 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@72 -- # xargs 00:23:22.959 22:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:22.959 22:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.959 22:23:19 -- common/autotest_common.sh@10 -- # set +x 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@72 -- # xargs 00:23:22.959 22:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:22.959 22:23:19 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:22.959 22:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.959 22:23:19 -- common/autotest_common.sh@10 -- # set +x 00:23:22.959 22:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.221 22:23:19 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:23.221 22:23:19 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:23.221 22:23:19 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:23.221 22:23:19 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:23.221 22:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.221 22:23:19 -- common/autotest_common.sh@10 -- # set +x 00:23:23.221 22:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.221 22:23:19 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:23.221 22:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.221 22:23:19 -- common/autotest_common.sh@10 -- # set +x 00:23:23.221 22:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.221 22:23:19 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:24.157 22:23:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.157 22:23:20 -- common/autotest_common.sh@10 -- # set +x 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@64 -- # sort 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@64 -- # xargs 00:23:24.157 22:23:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:24.157 22:23:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.157 22:23:20 -- common/autotest_common.sh@10 -- # set +x 00:23:24.157 22:23:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:24.157 22:23:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.157 22:23:20 -- common/autotest_common.sh@10 -- # set +x 00:23:24.157 [2024-11-17 22:23:20.712386] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:24.157 [2024-11-17 22:23:20.713006] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:24.157 [2024-11-17 22:23:20.713035] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:24.157 [2024-11-17 22:23:20.713088] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:24.157 [2024-11-17 22:23:20.713119] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:24.157 22:23:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:24.157 22:23:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.157 22:23:20 -- common/autotest_common.sh@10 -- # set +x 00:23:24.157 [2024-11-17 22:23:20.720249] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:24.157 [2024-11-17 22:23:20.721025] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:24.157 [2024-11-17 22:23:20.721144] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:24.157 22:23:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.157 22:23:20 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:24.416 [2024-11-17 22:23:20.851121] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:24.416 [2024-11-17 22:23:20.852134] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:24.416 [2024-11-17 22:23:20.908607] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:24.416 [2024-11-17 22:23:20.908824] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:24.416 [2024-11-17 22:23:20.909037] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:24.416 [2024-11-17 22:23:20.909249] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:24.416 [2024-11-17 22:23:20.909495] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:24.416 [2024-11-17 22:23:20.909626] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:24.416 [2024-11-17 22:23:20.909786] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:24.416 [2024-11-17 22:23:20.909903] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:24.416 [2024-11-17 22:23:20.954454] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:24.416 [2024-11-17 22:23:20.954473] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:24.416 [2024-11-17 22:23:20.955445] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:24.416 [2024-11-17 22:23:20.955595] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:25.351 22:23:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.351 22:23:21 -- common/autotest_common.sh@10 -- # set +x 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@68 -- # xargs 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@68 -- # sort 00:23:25.351 22:23:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.351 22:23:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.351 22:23:21 -- common/autotest_common.sh@10 -- # set +x 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@64 -- # sort 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@64 -- # xargs 00:23:25.351 22:23:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:25.351 22:23:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.351 22:23:21 -- common/autotest_common.sh@10 -- # set +x 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@72 -- # xargs 00:23:25.351 22:23:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@72 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:25.351 22:23:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.351 22:23:21 -- common/autotest_common.sh@10 -- # set +x 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:25.351 22:23:21 -- host/mdns_discovery.sh@72 -- # xargs 00:23:25.351 22:23:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.612 22:23:21 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:25.612 22:23:21 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:25.612 22:23:21 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:25.612 22:23:21 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:25.612 22:23:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.612 22:23:21 -- common/autotest_common.sh@10 -- # set +x 00:23:25.612 22:23:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.612 22:23:22 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:25.612 22:23:22 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:25.612 22:23:22 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:25.612 22:23:22 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:25.612 22:23:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.612 22:23:22 -- common/autotest_common.sh@10 -- # set +x 00:23:25.612 [2024-11-17 22:23:22.025444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.612 [2024-11-17 22:23:22.025499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.612 [2024-11-17 22:23:22.025530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.612 [2024-11-17 22:23:22.025540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.612 [2024-11-17 22:23:22.025550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.612 [2024-11-17 22:23:22.025558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.612 [2024-11-17 22:23:22.025568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.612 [2024-11-17 22:23:22.025577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.612 [2024-11-17 22:23:22.025586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.612 [2024-11-17 22:23:22.025820] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:25.612 [2024-11-17 22:23:22.025844] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:25.612 [2024-11-17 22:23:22.025882] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:25.612 [2024-11-17 
22:23:22.025898] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:25.612 22:23:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.612 22:23:22 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:25.612 22:23:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.612 22:23:22 -- common/autotest_common.sh@10 -- # set +x 00:23:25.612 [2024-11-17 22:23:22.033822] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:25.612 [2024-11-17 22:23:22.033929] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:25.612 [2024-11-17 22:23:22.035400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.612 [2024-11-17 22:23:22.037913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.612 [2024-11-17 22:23:22.037961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.612 [2024-11-17 22:23:22.037975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.612 [2024-11-17 22:23:22.037985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.612 [2024-11-17 22:23:22.038036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.612 [2024-11-17 22:23:22.038051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.612 [2024-11-17 22:23:22.038062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.612 [2024-11-17 22:23:22.038071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.612 [2024-11-17 22:23:22.038080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825410 is same with the state(5) to be set 00:23:25.612 22:23:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.612 22:23:22 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:25.612 [2024-11-17 22:23:22.045416] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.612 [2024-11-17 22:23:22.045552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.612 [2024-11-17 22:23:22.045620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.612 [2024-11-17 22:23:22.045639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889b70 with addr=10.0.0.2, port=4420 00:23:25.612 [2024-11-17 22:23:22.045651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.612 [2024-11-17 22:23:22.045668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.612 [2024-11-17 22:23:22.045699] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.612 [2024-11-17 22:23:22.045725] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.612 [2024-11-17 22:23:22.045736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.612 [2024-11-17 22:23:22.045770] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.612 [2024-11-17 22:23:22.047866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825410 (9): Bad file descriptor 00:23:25.612 [2024-11-17 22:23:22.055508] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.612 [2024-11-17 22:23:22.055625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.612 [2024-11-17 22:23:22.055675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.612 [2024-11-17 22:23:22.055693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889b70 with addr=10.0.0.2, port=4420 00:23:25.612 [2024-11-17 22:23:22.055703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.612 [2024-11-17 22:23:22.055720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.612 [2024-11-17 22:23:22.055734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.612 [2024-11-17 22:23:22.055743] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.612 [2024-11-17 22:23:22.055784] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.612 [2024-11-17 22:23:22.055818] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.612 [2024-11-17 22:23:22.057877] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.612 [2024-11-17 22:23:22.057979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.612 [2024-11-17 22:23:22.058042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.612 [2024-11-17 22:23:22.058062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825410 with addr=10.0.0.3, port=4420 00:23:25.612 [2024-11-17 22:23:22.058073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825410 is same with the state(5) to be set 00:23:25.612 [2024-11-17 22:23:22.058090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825410 (9): Bad file descriptor 00:23:25.612 [2024-11-17 22:23:22.058104] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.612 [2024-11-17 22:23:22.058113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.612 [2024-11-17 22:23:22.058122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.612 [2024-11-17 22:23:22.058137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
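The repeating "connect() failed, errno = 111" / "Resetting controller failed." blocks here and below are the expected fallout of the two listener removals traced just before them: with the 4420 listeners gone from both subsystems, every reconnect attempt to 10.0.0.2:4420 and 10.0.0.3:4420 is refused (errno 111 is ECONNREFUSED), and bdev_nvme keeps retrying until the discovery service drops those paths after processing an updated discovery log page. The removals, condensed from the trace and again assuming rpc_cmd maps to rpc.py against the target's default RPC socket (/var/tmp/spdk.sock):

    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 \
        -t tcp -a 10.0.0.3 -s 4420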
00:23:25.612 [2024-11-17 22:23:22.065592] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.612 [2024-11-17 22:23:22.065707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.612 [2024-11-17 22:23:22.065771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.612 [2024-11-17 22:23:22.065792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889b70 with addr=10.0.0.2, port=4420 00:23:25.613 [2024-11-17 22:23:22.065803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.613 [2024-11-17 22:23:22.065819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.613 [2024-11-17 22:23:22.065834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.613 [2024-11-17 22:23:22.065843] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.613 [2024-11-17 22:23:22.065852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.613 [2024-11-17 22:23:22.065900] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.613 [2024-11-17 22:23:22.067945] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.613 [2024-11-17 22:23:22.068057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.068107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.068125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825410 with addr=10.0.0.3, port=4420 00:23:25.613 [2024-11-17 22:23:22.068135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825410 is same with the state(5) to be set 00:23:25.613 [2024-11-17 22:23:22.068151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825410 (9): Bad file descriptor 00:23:25.613 [2024-11-17 22:23:22.068165] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.613 [2024-11-17 22:23:22.068174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.613 [2024-11-17 22:23:22.068183] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.613 [2024-11-17 22:23:22.068213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.613 [2024-11-17 22:23:22.075676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.613 [2024-11-17 22:23:22.075805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.075855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.075873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889b70 with addr=10.0.0.2, port=4420 00:23:25.613 [2024-11-17 22:23:22.075884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.613 [2024-11-17 22:23:22.075900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.613 [2024-11-17 22:23:22.075930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.613 [2024-11-17 22:23:22.075972] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.613 [2024-11-17 22:23:22.075983] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.613 [2024-11-17 22:23:22.075999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.613 [2024-11-17 22:23:22.078035] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.613 [2024-11-17 22:23:22.078138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.078190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.078208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825410 with addr=10.0.0.3, port=4420 00:23:25.613 [2024-11-17 22:23:22.078219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825410 is same with the state(5) to be set 00:23:25.613 [2024-11-17 22:23:22.078235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825410 (9): Bad file descriptor 00:23:25.613 [2024-11-17 22:23:22.078250] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.613 [2024-11-17 22:23:22.078259] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.613 [2024-11-17 22:23:22.078269] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.613 [2024-11-17 22:23:22.078300] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.613 [2024-11-17 22:23:22.085775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.613 [2024-11-17 22:23:22.085881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.085932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.085950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889b70 with addr=10.0.0.2, port=4420 00:23:25.613 [2024-11-17 22:23:22.085961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.613 [2024-11-17 22:23:22.085978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.613 [2024-11-17 22:23:22.086022] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.613 [2024-11-17 22:23:22.086051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.613 [2024-11-17 22:23:22.086061] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.613 [2024-11-17 22:23:22.086077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.613 [2024-11-17 22:23:22.088103] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.613 [2024-11-17 22:23:22.088219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.088269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.088288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825410 with addr=10.0.0.3, port=4420 00:23:25.613 [2024-11-17 22:23:22.088299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825410 is same with the state(5) to be set 00:23:25.613 [2024-11-17 22:23:22.088316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825410 (9): Bad file descriptor 00:23:25.613 [2024-11-17 22:23:22.088330] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.613 [2024-11-17 22:23:22.088339] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.613 [2024-11-17 22:23:22.088348] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.613 [2024-11-17 22:23:22.088379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.613 [2024-11-17 22:23:22.095845] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.613 [2024-11-17 22:23:22.095961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.096009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.096028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889b70 with addr=10.0.0.2, port=4420 00:23:25.613 [2024-11-17 22:23:22.096038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.613 [2024-11-17 22:23:22.096054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.613 [2024-11-17 22:23:22.096084] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.613 [2024-11-17 22:23:22.096095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.613 [2024-11-17 22:23:22.096121] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.613 [2024-11-17 22:23:22.096152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.613 [2024-11-17 22:23:22.098188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.613 [2024-11-17 22:23:22.098289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.098353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.098372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825410 with addr=10.0.0.3, port=4420 00:23:25.613 [2024-11-17 22:23:22.098383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825410 is same with the state(5) to be set 00:23:25.613 [2024-11-17 22:23:22.098398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825410 (9): Bad file descriptor 00:23:25.613 [2024-11-17 22:23:22.098412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.613 [2024-11-17 22:23:22.098421] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.613 [2024-11-17 22:23:22.098430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.613 [2024-11-17 22:23:22.098477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.613 [2024-11-17 22:23:22.105931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.613 [2024-11-17 22:23:22.106038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.106087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.106104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889b70 with addr=10.0.0.2, port=4420 00:23:25.613 [2024-11-17 22:23:22.106115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.613 [2024-11-17 22:23:22.106131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.613 [2024-11-17 22:23:22.106162] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.613 [2024-11-17 22:23:22.106174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.613 [2024-11-17 22:23:22.106183] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.613 [2024-11-17 22:23:22.106213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.613 [2024-11-17 22:23:22.108256] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.613 [2024-11-17 22:23:22.108368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.108416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.613 [2024-11-17 22:23:22.108434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825410 with addr=10.0.0.3, port=4420 00:23:25.613 [2024-11-17 22:23:22.108444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825410 is same with the state(5) to be set 00:23:25.613 [2024-11-17 22:23:22.108460] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825410 (9): Bad file descriptor 00:23:25.613 [2024-11-17 22:23:22.108475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.614 [2024-11-17 22:23:22.108483] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.614 [2024-11-17 22:23:22.108492] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.614 [2024-11-17 22:23:22.108522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.614 [2024-11-17 22:23:22.116001] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.614 [2024-11-17 22:23:22.116115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.116163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.116181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889b70 with addr=10.0.0.2, port=4420 00:23:25.614 [2024-11-17 22:23:22.116191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.614 [2024-11-17 22:23:22.116207] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.614 [2024-11-17 22:23:22.116237] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.614 [2024-11-17 22:23:22.116248] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.614 [2024-11-17 22:23:22.116273] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.614 [2024-11-17 22:23:22.116288] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.614 [2024-11-17 22:23:22.118352] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.614 [2024-11-17 22:23:22.118464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.118512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.118530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825410 with addr=10.0.0.3, port=4420 00:23:25.614 [2024-11-17 22:23:22.118541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825410 is same with the state(5) to be set 00:23:25.614 [2024-11-17 22:23:22.118557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825410 (9): Bad file descriptor 00:23:25.614 [2024-11-17 22:23:22.118571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.614 [2024-11-17 22:23:22.118579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.614 [2024-11-17 22:23:22.118588] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.614 [2024-11-17 22:23:22.118618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.614 [2024-11-17 22:23:22.126070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.614 [2024-11-17 22:23:22.126182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.126233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.126251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889b70 with addr=10.0.0.2, port=4420 00:23:25.614 [2024-11-17 22:23:22.126262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.614 [2024-11-17 22:23:22.126278] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.614 [2024-11-17 22:23:22.126311] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.614 [2024-11-17 22:23:22.126338] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.614 [2024-11-17 22:23:22.126364] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.614 [2024-11-17 22:23:22.126381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.614 [2024-11-17 22:23:22.128431] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.614 [2024-11-17 22:23:22.128543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.128592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.128610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825410 with addr=10.0.0.3, port=4420 00:23:25.614 [2024-11-17 22:23:22.128621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825410 is same with the state(5) to be set 00:23:25.614 [2024-11-17 22:23:22.128637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825410 (9): Bad file descriptor 00:23:25.614 [2024-11-17 22:23:22.128651] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.614 [2024-11-17 22:23:22.128660] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.614 [2024-11-17 22:23:22.128668] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.614 [2024-11-17 22:23:22.128699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.614 [2024-11-17 22:23:22.136147] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.614 [2024-11-17 22:23:22.136272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.136321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.136340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889b70 with addr=10.0.0.2, port=4420 00:23:25.614 [2024-11-17 22:23:22.136350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.614 [2024-11-17 22:23:22.136367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.614 [2024-11-17 22:23:22.136398] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.614 [2024-11-17 22:23:22.136426] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.614 [2024-11-17 22:23:22.136452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.614 [2024-11-17 22:23:22.136468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.614 [2024-11-17 22:23:22.138510] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.614 [2024-11-17 22:23:22.138625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.138674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.138694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825410 with addr=10.0.0.3, port=4420 00:23:25.614 [2024-11-17 22:23:22.138704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825410 is same with the state(5) to be set 00:23:25.614 [2024-11-17 22:23:22.138721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825410 (9): Bad file descriptor 00:23:25.614 [2024-11-17 22:23:22.138735] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.614 [2024-11-17 22:23:22.138744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.614 [2024-11-17 22:23:22.138801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.614 [2024-11-17 22:23:22.138837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.614 [2024-11-17 22:23:22.146237] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.614 [2024-11-17 22:23:22.146377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.146427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.146445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889b70 with addr=10.0.0.2, port=4420 00:23:25.614 [2024-11-17 22:23:22.146456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.614 [2024-11-17 22:23:22.146472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.614 [2024-11-17 22:23:22.146504] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.614 [2024-11-17 22:23:22.146531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.614 [2024-11-17 22:23:22.146541] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.614 [2024-11-17 22:23:22.146556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.614 [2024-11-17 22:23:22.148592] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.614 [2024-11-17 22:23:22.148704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.148766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.148787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825410 with addr=10.0.0.3, port=4420 00:23:25.614 [2024-11-17 22:23:22.148798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825410 is same with the state(5) to be set 00:23:25.614 [2024-11-17 22:23:22.148815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825410 (9): Bad file descriptor 00:23:25.614 [2024-11-17 22:23:22.148829] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.614 [2024-11-17 22:23:22.148838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.614 [2024-11-17 22:23:22.148846] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.614 [2024-11-17 22:23:22.148861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.614 [2024-11-17 22:23:22.156340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.614 [2024-11-17 22:23:22.156455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.156504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.614 [2024-11-17 22:23:22.156522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889b70 with addr=10.0.0.2, port=4420 00:23:25.614 [2024-11-17 22:23:22.156532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889b70 is same with the state(5) to be set 00:23:25.614 [2024-11-17 22:23:22.156548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889b70 (9): Bad file descriptor 00:23:25.614 [2024-11-17 22:23:22.156579] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.614 [2024-11-17 22:23:22.156590] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.614 [2024-11-17 22:23:22.156616] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.614 [2024-11-17 22:23:22.156631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.615 [2024-11-17 22:23:22.158673] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.615 [2024-11-17 22:23:22.158798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.615 [2024-11-17 22:23:22.158849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.615 [2024-11-17 22:23:22.158867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x825410 with addr=10.0.0.3, port=4420 00:23:25.615 [2024-11-17 22:23:22.158878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825410 is same with the state(5) to be set 00:23:25.615 [2024-11-17 22:23:22.158895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x825410 (9): Bad file descriptor 00:23:25.615 [2024-11-17 22:23:22.158920] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.615 [2024-11-17 22:23:22.158931] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.615 [2024-11-17 22:23:22.158956] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.615 [2024-11-17 22:23:22.158971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
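The burst of "connect() failed, errno = 111" (ECONNREFUSED) entries above is the host repeatedly trying to reconnect nqn.2016-06.io.spdk:cnode0 (10.0.0.2:4420) and nqn.2016-06.io.spdk:cnode20 (10.0.0.3:4420) after those listeners have gone away; the discovery entries that follow show the 4420 paths being dropped and the 4421 paths being picked up in their place. A minimal sketch of the kind of target-side listener move that produces this pattern, assuming the standard rpc.py listener commands (illustrative only, not the literal steps run by mdns_discovery.sh):

# Illustrative only: moving a subsystem's TCP listener from 4420 to 4421 on the target
# leaves host reconnects to 4420 failing with ECONNREFUSED (errno 111) until the
# discovery log page advertises the new port.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421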
00:23:25.615 [2024-11-17 22:23:22.164579] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:25.615 [2024-11-17 22:23:22.164626] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:25.615 [2024-11-17 22:23:22.164647] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:25.615 [2024-11-17 22:23:22.165575] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:25.615 [2024-11-17 22:23:22.165616] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:25.615 [2024-11-17 22:23:22.165635] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:25.874 [2024-11-17 22:23:22.250678] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:25.874 [2024-11-17 22:23:22.251668] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:26.441 22:23:23 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:26.441 22:23:23 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:26.441 22:23:23 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.441 22:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.441 22:23:23 -- common/autotest_common.sh@10 -- # set +x 00:23:26.441 22:23:23 -- host/mdns_discovery.sh@68 -- # sort 00:23:26.441 22:23:23 -- host/mdns_discovery.sh@68 -- # xargs 00:23:26.700 22:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.700 22:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.700 22:23:23 -- common/autotest_common.sh@10 -- # set +x 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@64 -- # sort 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@64 -- # xargs 00:23:26.700 22:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:26.700 22:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.700 22:23:23 -- common/autotest_common.sh@10 -- # set +x 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@72 -- # xargs 00:23:26.700 22:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:26.700 22:23:23 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@72 -- # xargs 00:23:26.700 22:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.700 22:23:23 -- common/autotest_common.sh@10 -- # set +x 00:23:26.700 22:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:26.700 22:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.700 22:23:23 -- common/autotest_common.sh@10 -- # set +x 00:23:26.700 22:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:26.700 22:23:23 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:26.700 22:23:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.700 22:23:23 -- common/autotest_common.sh@10 -- # set +x 00:23:26.959 22:23:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.959 [2024-11-17 22:23:23.322409] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:26.959 22:23:23 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:27.895 22:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@80 -- # sort 00:23:27.895 22:23:24 -- common/autotest_common.sh@10 -- # set +x 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@80 -- # xargs 00:23:27.895 22:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@68 -- # sort 00:23:27.895 22:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@68 -- # xargs 00:23:27.895 22:23:24 -- common/autotest_common.sh@10 -- # set +x 00:23:27.895 22:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:27.895 22:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:27.895 22:23:24 -- common/autotest_common.sh@10 -- # set +x 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@64 -- # sort 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@64 -- # xargs 00:23:27.895 22:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:27.895 22:23:24 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:27.895 22:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.895 22:23:24 -- common/autotest_common.sh@10 -- # set +x 00:23:28.154 22:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.154 22:23:24 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:28.154 22:23:24 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:28.154 22:23:24 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:28.154 22:23:24 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:28.154 22:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.154 22:23:24 -- common/autotest_common.sh@10 -- # set +x 00:23:28.154 22:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.154 22:23:24 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:28.154 22:23:24 -- common/autotest_common.sh@650 -- # local es=0 00:23:28.154 22:23:24 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:28.154 22:23:24 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:28.154 22:23:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.154 22:23:24 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:28.154 22:23:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.154 22:23:24 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:28.154 22:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.155 22:23:24 -- common/autotest_common.sh@10 -- # set +x 00:23:28.155 [2024-11-17 22:23:24.561952] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:28.155 2024/11/17 22:23:24 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:28.155 request: 00:23:28.155 { 00:23:28.155 "method": "bdev_nvme_start_mdns_discovery", 00:23:28.155 "params": { 00:23:28.155 "name": "mdns", 00:23:28.155 "svcname": "_nvme-disc._http", 00:23:28.155 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:28.155 } 00:23:28.155 } 00:23:28.155 Got JSON-RPC error response 00:23:28.155 GoRPCClient: error on JSON-RPC call 00:23:28.155 22:23:24 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:28.155 22:23:24 -- 
common/autotest_common.sh@653 -- # es=1 00:23:28.155 22:23:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:28.155 22:23:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:28.155 22:23:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:28.155 22:23:24 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:28.413 [2024-11-17 22:23:24.950440] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:28.673 [2024-11-17 22:23:25.050439] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:28.673 [2024-11-17 22:23:25.150443] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:28.673 [2024-11-17 22:23:25.150611] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:28.673 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:28.673 cookie is 0 00:23:28.673 is_local: 1 00:23:28.673 our_own: 0 00:23:28.673 wide_area: 0 00:23:28.673 multicast: 1 00:23:28.673 cached: 1 00:23:28.673 [2024-11-17 22:23:25.250444] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:28.673 [2024-11-17 22:23:25.250626] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:28.673 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:28.673 cookie is 0 00:23:28.673 is_local: 1 00:23:28.673 our_own: 0 00:23:28.673 wide_area: 0 00:23:28.673 multicast: 1 00:23:28.673 cached: 1 00:23:29.609 [2024-11-17 22:23:26.156415] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:29.609 [2024-11-17 22:23:26.156565] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:29.609 [2024-11-17 22:23:26.156626] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:29.867 [2024-11-17 22:23:26.242519] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:29.867 [2024-11-17 22:23:26.256241] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:29.867 [2024-11-17 22:23:26.256386] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:29.867 [2024-11-17 22:23:26.256445] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.867 [2024-11-17 22:23:26.305398] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:29.867 [2024-11-17 22:23:26.305584] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:29.867 [2024-11-17 22:23:26.342138] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:29.867 [2024-11-17 22:23:26.400924] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:29.867 [2024-11-17 22:23:26.401107] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:33.223 22:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.223 22:23:29 -- common/autotest_common.sh@10 -- # set +x 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@80 -- # sort 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@80 -- # xargs 00:23:33.223 22:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@76 -- # sort 00:23:33.223 22:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.223 22:23:29 -- common/autotest_common.sh@10 -- # set +x 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@76 -- # xargs 00:23:33.223 22:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:33.223 22:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@64 -- # sort 00:23:33.223 22:23:29 -- common/autotest_common.sh@10 -- # set +x 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@64 -- # xargs 00:23:33.223 22:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:33.223 22:23:29 -- common/autotest_common.sh@650 -- # local es=0 00:23:33.223 22:23:29 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:33.223 22:23:29 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:33.223 22:23:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.223 22:23:29 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:33.223 22:23:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.223 22:23:29 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:33.223 22:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.223 22:23:29 -- common/autotest_common.sh@10 -- # set +x 00:23:33.223 [2024-11-17 22:23:29.748687] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:33.223 2024/11/17 22:23:29 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:33.223 request: 00:23:33.223 { 00:23:33.223 "method": "bdev_nvme_start_mdns_discovery", 00:23:33.223 "params": { 00:23:33.223 "name": "cdc", 00:23:33.223 "svcname": "_nvme-disc._tcp", 00:23:33.223 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:33.223 } 00:23:33.223 } 00:23:33.223 Got JSON-RPC error response 00:23:33.223 GoRPCClient: error on JSON-RPC call 00:23:33.223 22:23:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:33.223 22:23:29 -- common/autotest_common.sh@653 -- # es=1 00:23:33.223 22:23:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.223 22:23:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.223 22:23:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:33.223 22:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@76 -- # sort 00:23:33.223 22:23:29 -- common/autotest_common.sh@10 -- # set +x 00:23:33.223 22:23:29 -- host/mdns_discovery.sh@76 -- # xargs 00:23:33.223 22:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.487 22:23:29 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:33.487 22:23:29 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:33.487 22:23:29 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:33.487 22:23:29 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.487 22:23:29 -- host/mdns_discovery.sh@64 -- # sort 00:23:33.487 22:23:29 -- host/mdns_discovery.sh@64 -- # xargs 00:23:33.487 22:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.487 22:23:29 -- common/autotest_common.sh@10 -- # set +x 00:23:33.487 22:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.487 22:23:29 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:33.487 22:23:29 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:33.487 22:23:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.487 22:23:29 -- common/autotest_common.sh@10 -- # set +x 00:23:33.487 22:23:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.487 22:23:29 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:33.487 22:23:29 -- host/mdns_discovery.sh@197 -- # kill 87755 00:23:33.487 22:23:29 -- host/mdns_discovery.sh@200 -- # wait 87755 00:23:33.487 [2024-11-17 22:23:29.978087] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:33.487 22:23:30 -- host/mdns_discovery.sh@201 -- # kill 87842 00:23:33.487 Got SIGTERM, quitting. 00:23:33.487 22:23:30 -- host/mdns_discovery.sh@202 -- # kill 87785 00:23:33.487 22:23:30 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:33.487 22:23:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:33.487 22:23:30 -- nvmf/common.sh@116 -- # sync 00:23:33.487 Got SIGTERM, quitting. 
00:23:33.487 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:33.487 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:33.487 avahi-daemon 0.8 exiting. 00:23:33.746 22:23:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:33.746 22:23:30 -- nvmf/common.sh@119 -- # set +e 00:23:33.746 22:23:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:33.746 22:23:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:33.746 rmmod nvme_tcp 00:23:33.746 rmmod nvme_fabrics 00:23:33.746 rmmod nvme_keyring 00:23:33.746 22:23:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:33.746 22:23:30 -- nvmf/common.sh@123 -- # set -e 00:23:33.746 22:23:30 -- nvmf/common.sh@124 -- # return 0 00:23:33.746 22:23:30 -- nvmf/common.sh@477 -- # '[' -n 87705 ']' 00:23:33.746 22:23:30 -- nvmf/common.sh@478 -- # killprocess 87705 00:23:33.746 22:23:30 -- common/autotest_common.sh@936 -- # '[' -z 87705 ']' 00:23:33.746 22:23:30 -- common/autotest_common.sh@940 -- # kill -0 87705 00:23:33.746 22:23:30 -- common/autotest_common.sh@941 -- # uname 00:23:33.746 22:23:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:33.746 22:23:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87705 00:23:33.746 22:23:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:33.746 22:23:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:33.747 killing process with pid 87705 00:23:33.747 22:23:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87705' 00:23:33.747 22:23:30 -- common/autotest_common.sh@955 -- # kill 87705 00:23:33.747 22:23:30 -- common/autotest_common.sh@960 -- # wait 87705 00:23:34.005 22:23:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:34.005 22:23:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:34.005 22:23:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:34.005 22:23:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:34.005 22:23:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:34.005 22:23:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.005 22:23:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.005 22:23:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.005 22:23:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:34.005 00:23:34.005 real 0m20.712s 00:23:34.005 user 0m40.341s 00:23:34.005 sys 0m1.936s 00:23:34.005 22:23:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:34.005 22:23:30 -- common/autotest_common.sh@10 -- # set +x 00:23:34.005 ************************************ 00:23:34.005 END TEST nvmf_mdns_discovery 00:23:34.005 ************************************ 00:23:34.005 22:23:30 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:34.006 22:23:30 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:34.006 22:23:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:34.006 22:23:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:34.006 22:23:30 -- common/autotest_common.sh@10 -- # set +x 00:23:34.265 ************************************ 00:23:34.265 START TEST nvmf_multipath 00:23:34.265 ************************************ 00:23:34.265 22:23:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:34.265 * Looking for 
test storage... 00:23:34.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:34.265 22:23:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:34.265 22:23:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:34.265 22:23:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:34.265 22:23:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:34.265 22:23:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:34.265 22:23:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:34.265 22:23:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:34.265 22:23:30 -- scripts/common.sh@335 -- # IFS=.-: 00:23:34.265 22:23:30 -- scripts/common.sh@335 -- # read -ra ver1 00:23:34.265 22:23:30 -- scripts/common.sh@336 -- # IFS=.-: 00:23:34.265 22:23:30 -- scripts/common.sh@336 -- # read -ra ver2 00:23:34.265 22:23:30 -- scripts/common.sh@337 -- # local 'op=<' 00:23:34.265 22:23:30 -- scripts/common.sh@339 -- # ver1_l=2 00:23:34.265 22:23:30 -- scripts/common.sh@340 -- # ver2_l=1 00:23:34.265 22:23:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:34.265 22:23:30 -- scripts/common.sh@343 -- # case "$op" in 00:23:34.265 22:23:30 -- scripts/common.sh@344 -- # : 1 00:23:34.265 22:23:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:34.265 22:23:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:34.265 22:23:30 -- scripts/common.sh@364 -- # decimal 1 00:23:34.265 22:23:30 -- scripts/common.sh@352 -- # local d=1 00:23:34.265 22:23:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:34.265 22:23:30 -- scripts/common.sh@354 -- # echo 1 00:23:34.265 22:23:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:34.265 22:23:30 -- scripts/common.sh@365 -- # decimal 2 00:23:34.265 22:23:30 -- scripts/common.sh@352 -- # local d=2 00:23:34.265 22:23:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:34.265 22:23:30 -- scripts/common.sh@354 -- # echo 2 00:23:34.265 22:23:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:34.265 22:23:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:34.265 22:23:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:34.265 22:23:30 -- scripts/common.sh@367 -- # return 0 00:23:34.265 22:23:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:34.265 22:23:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:34.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.265 --rc genhtml_branch_coverage=1 00:23:34.265 --rc genhtml_function_coverage=1 00:23:34.265 --rc genhtml_legend=1 00:23:34.265 --rc geninfo_all_blocks=1 00:23:34.265 --rc geninfo_unexecuted_blocks=1 00:23:34.265 00:23:34.265 ' 00:23:34.265 22:23:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:34.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.265 --rc genhtml_branch_coverage=1 00:23:34.265 --rc genhtml_function_coverage=1 00:23:34.265 --rc genhtml_legend=1 00:23:34.265 --rc geninfo_all_blocks=1 00:23:34.265 --rc geninfo_unexecuted_blocks=1 00:23:34.265 00:23:34.265 ' 00:23:34.265 22:23:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:34.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.265 --rc genhtml_branch_coverage=1 00:23:34.265 --rc genhtml_function_coverage=1 00:23:34.265 --rc genhtml_legend=1 00:23:34.265 --rc geninfo_all_blocks=1 00:23:34.265 --rc geninfo_unexecuted_blocks=1 00:23:34.265 00:23:34.265 ' 
00:23:34.265 22:23:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:34.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.265 --rc genhtml_branch_coverage=1 00:23:34.265 --rc genhtml_function_coverage=1 00:23:34.265 --rc genhtml_legend=1 00:23:34.265 --rc geninfo_all_blocks=1 00:23:34.265 --rc geninfo_unexecuted_blocks=1 00:23:34.265 00:23:34.265 ' 00:23:34.265 22:23:30 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:34.265 22:23:30 -- nvmf/common.sh@7 -- # uname -s 00:23:34.265 22:23:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.265 22:23:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.265 22:23:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.265 22:23:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.265 22:23:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.265 22:23:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.265 22:23:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.265 22:23:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.265 22:23:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.265 22:23:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.265 22:23:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:23:34.265 22:23:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:23:34.265 22:23:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.265 22:23:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.265 22:23:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:34.265 22:23:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:34.265 22:23:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.265 22:23:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.265 22:23:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.265 22:23:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.265 22:23:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.265 22:23:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.265 22:23:30 -- paths/export.sh@5 -- # export PATH 00:23:34.265 22:23:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.265 22:23:30 -- nvmf/common.sh@46 -- # : 0 00:23:34.265 22:23:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:34.265 22:23:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:34.265 22:23:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:34.265 22:23:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.265 22:23:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.265 22:23:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:34.265 22:23:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:34.265 22:23:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:34.265 22:23:30 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:34.265 22:23:30 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:34.266 22:23:30 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:34.266 22:23:30 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:34.266 22:23:30 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.266 22:23:30 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:34.266 22:23:30 -- host/multipath.sh@30 -- # nvmftestinit 00:23:34.266 22:23:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:34.266 22:23:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.266 22:23:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:34.266 22:23:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:34.266 22:23:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:34.266 22:23:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.266 22:23:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.266 22:23:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.266 22:23:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:34.266 22:23:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:34.266 22:23:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:34.266 22:23:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:34.266 22:23:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:34.266 22:23:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:34.266 22:23:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.266 22:23:30 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.266 22:23:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:34.266 22:23:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:34.266 22:23:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:34.266 22:23:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:34.266 22:23:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:34.266 22:23:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.266 22:23:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:34.266 22:23:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:34.266 22:23:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:34.266 22:23:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:34.266 22:23:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:34.266 22:23:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:34.266 Cannot find device "nvmf_tgt_br" 00:23:34.266 22:23:30 -- nvmf/common.sh@154 -- # true 00:23:34.266 22:23:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:34.525 Cannot find device "nvmf_tgt_br2" 00:23:34.525 22:23:30 -- nvmf/common.sh@155 -- # true 00:23:34.525 22:23:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:34.525 22:23:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:34.525 Cannot find device "nvmf_tgt_br" 00:23:34.525 22:23:30 -- nvmf/common.sh@157 -- # true 00:23:34.525 22:23:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:34.525 Cannot find device "nvmf_tgt_br2" 00:23:34.525 22:23:30 -- nvmf/common.sh@158 -- # true 00:23:34.525 22:23:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:34.525 22:23:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:34.525 22:23:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:34.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:34.525 22:23:30 -- nvmf/common.sh@161 -- # true 00:23:34.525 22:23:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:34.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:34.525 22:23:30 -- nvmf/common.sh@162 -- # true 00:23:34.525 22:23:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:34.525 22:23:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:34.525 22:23:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:34.525 22:23:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:34.525 22:23:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:34.525 22:23:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:34.525 22:23:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:34.525 22:23:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:34.525 22:23:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:34.525 22:23:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:34.525 22:23:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:34.525 22:23:31 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:34.525 22:23:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:34.525 22:23:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:34.525 22:23:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:34.525 22:23:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:34.525 22:23:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:34.525 22:23:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:34.525 22:23:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:34.525 22:23:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:34.525 22:23:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:34.783 22:23:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:34.784 22:23:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:34.784 22:23:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:34.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:23:34.784 00:23:34.784 --- 10.0.0.2 ping statistics --- 00:23:34.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.784 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:34.784 22:23:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:34.784 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:34.784 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:34.784 00:23:34.784 --- 10.0.0.3 ping statistics --- 00:23:34.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.784 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:34.784 22:23:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:34.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:23:34.784 00:23:34.784 --- 10.0.0.1 ping statistics --- 00:23:34.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.784 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:23:34.784 22:23:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.784 22:23:31 -- nvmf/common.sh@421 -- # return 0 00:23:34.784 22:23:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:34.784 22:23:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.784 22:23:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:34.784 22:23:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:34.784 22:23:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.784 22:23:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:34.784 22:23:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:34.784 22:23:31 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:34.784 22:23:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:34.784 22:23:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:34.784 22:23:31 -- common/autotest_common.sh@10 -- # set +x 00:23:34.784 22:23:31 -- nvmf/common.sh@469 -- # nvmfpid=88359 00:23:34.784 22:23:31 -- nvmf/common.sh@470 -- # waitforlisten 88359 00:23:34.784 22:23:31 -- common/autotest_common.sh@829 -- # '[' -z 88359 ']' 00:23:34.784 22:23:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:34.784 22:23:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.784 22:23:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.784 22:23:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.784 22:23:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.784 22:23:31 -- common/autotest_common.sh@10 -- # set +x 00:23:34.784 [2024-11-17 22:23:31.231224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:34.784 [2024-11-17 22:23:31.231302] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.784 [2024-11-17 22:23:31.368484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:35.042 [2024-11-17 22:23:31.473382] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:35.042 [2024-11-17 22:23:31.473848] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.042 [2024-11-17 22:23:31.474012] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.042 [2024-11-17 22:23:31.474149] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
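The "Cannot find device" and "Cannot open network namespace" messages above are only the teardown half of nvmf/common.sh running on a clean node; the setup half then builds an isolated veth/bridge topology for the test: the initiator interface nvmf_init_if (10.0.0.1/24) stays in the root namespace, the two target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the root-side peer ends are enslaved to the nvmf_br bridge so the initiator can reach both target addresses. A condensed sketch of that setup, reconstructed from the commands traced above rather than copied from the script:

    # namespace and three veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # target-side ends go into the namespace; addresses on the endpoints
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the root-namespace peers together
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # let NVMe/TCP traffic in on the initiator end and forward across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings confirm reachability in both directions before nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -m 0x3, pid 88359 above).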
00:23:35.042 [2024-11-17 22:23:31.474368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.042 [2024-11-17 22:23:31.474386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.609 22:23:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.609 22:23:32 -- common/autotest_common.sh@862 -- # return 0 00:23:35.610 22:23:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:35.610 22:23:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:35.610 22:23:32 -- common/autotest_common.sh@10 -- # set +x 00:23:35.867 22:23:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.867 22:23:32 -- host/multipath.sh@33 -- # nvmfapp_pid=88359 00:23:35.867 22:23:32 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:35.867 [2024-11-17 22:23:32.437549] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.867 22:23:32 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:36.434 Malloc0 00:23:36.434 22:23:32 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:36.434 22:23:32 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:36.693 22:23:33 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.952 [2024-11-17 22:23:33.431717] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.952 22:23:33 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:37.211 [2024-11-17 22:23:33.623884] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:37.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.211 22:23:33 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:37.211 22:23:33 -- host/multipath.sh@44 -- # bdevperf_pid=88461 00:23:37.211 22:23:33 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:37.211 22:23:33 -- host/multipath.sh@47 -- # waitforlisten 88461 /var/tmp/bdevperf.sock 00:23:37.211 22:23:33 -- common/autotest_common.sh@829 -- # '[' -z 88461 ']' 00:23:37.211 22:23:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.211 22:23:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:37.211 22:23:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
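With the target up, host/multipath.sh configures it over JSON-RPC and then launches a second SPDK application, bdevperf, on the host side (RPC socket /var/tmp/bdevperf.sock, pid 88461). The target-side configuration traced above boils down to one malloc bdev exported through a single ANA-reporting subsystem with two TCP listeners on the same address, which is what gives the host two distinct paths to one namespace. Roughly, assuming the usual rpc.py flag meanings (-a allow any host, -r enable ANA reporting, -m max namespaces; the transport options -o and -u 8192 are taken verbatim from the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o -u 8192                       # TCP transport
    $RPC bdev_malloc_create 64 512 -b Malloc0                          # 64 MB RAM bdev, 512-byte blocks
    $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -r -m 2   # ANA-reporting subsystem
    $RPC nvmf_subsystem_add_ns $NQN Malloc0                            # attach Malloc0 as a namespace
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # path A
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # path B

bdevperf is started with -z, i.e. it waits to be configured over its own RPC socket before running anything; the bdev_nvme_attach_controller calls that follow attach both listeners to one Nvme0 controller (the second with -x multipath), and bdevperf.py perform_tests then runs the queued 4 KiB verify workload (-q 128 -o 4096 -w verify -t 90) that the rest of this log exercises.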
00:23:37.211 22:23:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:37.211 22:23:33 -- common/autotest_common.sh@10 -- # set +x 00:23:38.148 22:23:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.148 22:23:34 -- common/autotest_common.sh@862 -- # return 0 00:23:38.148 22:23:34 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:38.407 22:23:34 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:38.974 Nvme0n1 00:23:38.974 22:23:35 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:39.232 Nvme0n1 00:23:39.232 22:23:35 -- host/multipath.sh@78 -- # sleep 1 00:23:39.232 22:23:35 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:40.169 22:23:36 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:40.169 22:23:36 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:40.428 22:23:36 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:40.687 22:23:37 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:40.687 22:23:37 -- host/multipath.sh@65 -- # dtrace_pid=88548 00:23:40.687 22:23:37 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88359 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:40.687 22:23:37 -- host/multipath.sh@66 -- # sleep 6 00:23:47.253 22:23:43 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:47.253 22:23:43 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:47.253 22:23:43 -- host/multipath.sh@67 -- # active_port=4421 00:23:47.253 22:23:43 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:47.253 Attaching 4 probes... 
00:23:47.253 @path[10.0.0.2, 4421]: 20290 00:23:47.253 @path[10.0.0.2, 4421]: 20473 00:23:47.253 @path[10.0.0.2, 4421]: 22350 00:23:47.253 @path[10.0.0.2, 4421]: 23275 00:23:47.253 @path[10.0.0.2, 4421]: 23348 00:23:47.253 22:23:43 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:47.253 22:23:43 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:47.253 22:23:43 -- host/multipath.sh@69 -- # sed -n 1p 00:23:47.253 22:23:43 -- host/multipath.sh@69 -- # port=4421 00:23:47.253 22:23:43 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:47.253 22:23:43 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:47.253 22:23:43 -- host/multipath.sh@72 -- # kill 88548 00:23:47.253 22:23:43 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:47.253 22:23:43 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:47.253 22:23:43 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:47.253 22:23:43 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:47.512 22:23:44 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:47.512 22:23:44 -- host/multipath.sh@65 -- # dtrace_pid=88686 00:23:47.512 22:23:44 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88359 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:47.512 22:23:44 -- host/multipath.sh@66 -- # sleep 6 00:23:54.098 22:23:50 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:54.098 22:23:50 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:54.098 22:23:50 -- host/multipath.sh@67 -- # active_port=4420 00:23:54.098 22:23:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:54.098 Attaching 4 probes... 
00:23:54.098 @path[10.0.0.2, 4420]: 21238 00:23:54.098 @path[10.0.0.2, 4420]: 21666 00:23:54.098 @path[10.0.0.2, 4420]: 21754 00:23:54.098 @path[10.0.0.2, 4420]: 21631 00:23:54.098 @path[10.0.0.2, 4420]: 21626 00:23:54.098 22:23:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:54.098 22:23:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:54.098 22:23:50 -- host/multipath.sh@69 -- # sed -n 1p 00:23:54.098 22:23:50 -- host/multipath.sh@69 -- # port=4420 00:23:54.098 22:23:50 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:54.098 22:23:50 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:54.098 22:23:50 -- host/multipath.sh@72 -- # kill 88686 00:23:54.098 22:23:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:54.098 22:23:50 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:54.098 22:23:50 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:54.098 22:23:50 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:54.357 22:23:50 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:54.357 22:23:50 -- host/multipath.sh@65 -- # dtrace_pid=88818 00:23:54.357 22:23:50 -- host/multipath.sh@66 -- # sleep 6 00:23:54.357 22:23:50 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88359 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:00.923 22:23:56 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:00.923 22:23:56 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:00.923 22:23:57 -- host/multipath.sh@67 -- # active_port=4421 00:24:00.923 22:23:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.923 Attaching 4 probes... 
00:24:00.923 @path[10.0.0.2, 4421]: 14924 00:24:00.923 @path[10.0.0.2, 4421]: 20903 00:24:00.923 @path[10.0.0.2, 4421]: 20955 00:24:00.923 @path[10.0.0.2, 4421]: 20928 00:24:00.923 @path[10.0.0.2, 4421]: 21004 00:24:00.923 22:23:57 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:00.923 22:23:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:00.923 22:23:57 -- host/multipath.sh@69 -- # sed -n 1p 00:24:00.923 22:23:57 -- host/multipath.sh@69 -- # port=4421 00:24:00.923 22:23:57 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:00.923 22:23:57 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:00.923 22:23:57 -- host/multipath.sh@72 -- # kill 88818 00:24:00.923 22:23:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.923 22:23:57 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:00.923 22:23:57 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:00.923 22:23:57 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:01.182 22:23:57 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:01.182 22:23:57 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88359 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:01.182 22:23:57 -- host/multipath.sh@65 -- # dtrace_pid=88947 00:24:01.182 22:23:57 -- host/multipath.sh@66 -- # sleep 6 00:24:07.747 22:24:03 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:07.747 22:24:03 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:07.747 22:24:03 -- host/multipath.sh@67 -- # active_port= 00:24:07.747 22:24:03 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.747 Attaching 4 probes... 
00:24:07.747 00:24:07.747 00:24:07.747 00:24:07.747 00:24:07.747 00:24:07.747 22:24:03 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:07.747 22:24:03 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:07.747 22:24:03 -- host/multipath.sh@69 -- # sed -n 1p 00:24:07.747 22:24:03 -- host/multipath.sh@69 -- # port= 00:24:07.747 22:24:03 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:07.747 22:24:03 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:07.747 22:24:03 -- host/multipath.sh@72 -- # kill 88947 00:24:07.747 22:24:03 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.747 22:24:03 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:07.747 22:24:03 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.747 22:24:04 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:08.006 22:24:04 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:08.006 22:24:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88359 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:08.006 22:24:04 -- host/multipath.sh@65 -- # dtrace_pid=89079 00:24:08.006 22:24:04 -- host/multipath.sh@66 -- # sleep 6 00:24:14.572 22:24:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:14.572 22:24:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:14.572 22:24:10 -- host/multipath.sh@67 -- # active_port=4421 00:24:14.572 22:24:10 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:14.572 Attaching 4 probes... 
00:24:14.572 @path[10.0.0.2, 4421]: 19461 00:24:14.572 @path[10.0.0.2, 4421]: 21954 00:24:14.572 @path[10.0.0.2, 4421]: 22070 00:24:14.572 @path[10.0.0.2, 4421]: 22002 00:24:14.572 @path[10.0.0.2, 4421]: 22244 00:24:14.572 22:24:10 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:14.572 22:24:10 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:14.572 22:24:10 -- host/multipath.sh@69 -- # sed -n 1p 00:24:14.572 22:24:10 -- host/multipath.sh@69 -- # port=4421 00:24:14.572 22:24:10 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:14.572 22:24:10 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:14.572 22:24:10 -- host/multipath.sh@72 -- # kill 89079 00:24:14.572 22:24:10 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:14.572 22:24:10 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:14.573 [2024-11-17 22:24:10.953321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953376] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953384] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953436] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953443] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953458] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953466] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953511] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953571] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953637] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953659] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 [2024-11-17 22:24:10.953688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4800 is same with the state(5) to be set 00:24:14.573 22:24:10 -- host/multipath.sh@101 -- # sleep 1 00:24:15.509 22:24:11 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:15.509 22:24:11 -- host/multipath.sh@65 -- # dtrace_pid=89209 00:24:15.509 22:24:11 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88359 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:15.509 22:24:11 -- host/multipath.sh@66 -- # sleep 6 00:24:22.085 22:24:17 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:22.085 22:24:17 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:22.085 22:24:18 -- host/multipath.sh@67 -- # active_port=4420 00:24:22.085 22:24:18 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:22.085 Attaching 4 probes... 
00:24:22.085 @path[10.0.0.2, 4420]: 21682 00:24:22.085 @path[10.0.0.2, 4420]: 22075 00:24:22.085 @path[10.0.0.2, 4420]: 22222 00:24:22.085 @path[10.0.0.2, 4420]: 22367 00:24:22.085 @path[10.0.0.2, 4420]: 22359 00:24:22.085 22:24:18 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:22.085 22:24:18 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:22.085 22:24:18 -- host/multipath.sh@69 -- # sed -n 1p 00:24:22.085 22:24:18 -- host/multipath.sh@69 -- # port=4420 00:24:22.085 22:24:18 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:22.085 22:24:18 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:22.085 22:24:18 -- host/multipath.sh@72 -- # kill 89209 00:24:22.085 22:24:18 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:22.085 22:24:18 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:22.085 [2024-11-17 22:24:18.520680] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:22.085 22:24:18 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:22.343 22:24:18 -- host/multipath.sh@111 -- # sleep 6 00:24:28.908 22:24:24 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:28.908 22:24:24 -- host/multipath.sh@65 -- # dtrace_pid=89407 00:24:28.908 22:24:24 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88359 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:28.908 22:24:24 -- host/multipath.sh@66 -- # sleep 6 00:24:34.177 22:24:30 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:34.177 22:24:30 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:34.436 22:24:31 -- host/multipath.sh@67 -- # active_port=4421 00:24:34.436 22:24:31 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:34.436 Attaching 4 probes... 
00:24:34.436 @path[10.0.0.2, 4421]: 21552 00:24:34.436 @path[10.0.0.2, 4421]: 21905 00:24:34.436 @path[10.0.0.2, 4421]: 22005 00:24:34.436 @path[10.0.0.2, 4421]: 22290 00:24:34.436 @path[10.0.0.2, 4421]: 21795 00:24:34.436 22:24:31 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:34.436 22:24:31 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:34.436 22:24:31 -- host/multipath.sh@69 -- # sed -n 1p 00:24:34.436 22:24:31 -- host/multipath.sh@69 -- # port=4421 00:24:34.436 22:24:31 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:34.436 22:24:31 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:34.436 22:24:31 -- host/multipath.sh@72 -- # kill 89407 00:24:34.436 22:24:31 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:34.436 22:24:31 -- host/multipath.sh@114 -- # killprocess 88461 00:24:34.436 22:24:31 -- common/autotest_common.sh@936 -- # '[' -z 88461 ']' 00:24:34.436 22:24:31 -- common/autotest_common.sh@940 -- # kill -0 88461 00:24:34.436 22:24:31 -- common/autotest_common.sh@941 -- # uname 00:24:34.695 22:24:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:34.695 22:24:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88461 00:24:34.695 killing process with pid 88461 00:24:34.695 22:24:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:34.695 22:24:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:34.695 22:24:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88461' 00:24:34.695 22:24:31 -- common/autotest_common.sh@955 -- # kill 88461 00:24:34.695 22:24:31 -- common/autotest_common.sh@960 -- # wait 88461 00:24:34.695 Connection closed with partial response: 00:24:34.695 00:24:34.695 00:24:34.962 22:24:31 -- host/multipath.sh@116 -- # wait 88461 00:24:34.962 22:24:31 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:34.962 [2024-11-17 22:23:33.680941] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:34.962 [2024-11-17 22:23:33.681046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88461 ] 00:24:34.962 [2024-11-17 22:23:33.817138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.962 [2024-11-17 22:23:33.926250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.962 Running I/O for 90 seconds... 
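Every confirm_io_on_port cycle above follows the same pattern: point a bpftrace script (scripts/bpf/nvmf_path.bt) at the target pid so completed I/O is counted per listener as "@path[10.0.0.2, <port>]: <count>" samples, let bdevperf run for six seconds, ask the target which listener currently carries the expected ANA state, and check that the sampled I/O actually went to that port (or to no port at all when both paths are inaccessible). A rough reconstruction of the helper for the "optimized" case, with the trace-file redirection assumed rather than taken from the script:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    TRACE=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

    # sample the target's submission path for ~6 seconds
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88359 \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > "$TRACE" &
    dtrace_pid=$!
    sleep 6

    # which listener does the target report as "optimized" right now?
    active_port=$($RPC nvmf_subsystem_get_listeners $NQN \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')

    # which port did the traced I/O actually hit? (first sampled @path line)
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$TRACE" | cut -d ']' -f1 | sed -n 1p)
    [[ "$port" == "$active_port" ]] || exit 1

    kill $dtrace_pid
    rm -f "$TRACE"

After the last cycle the test kills bdevperf (pid 88461); the "Connection closed with partial response" lines are presumably the perform_tests RPC client losing its connection when bdevperf exits, and the try.txt dump that follows is bdevperf's own log. The nvme_qpair completions in it with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) are commands that landed on a path just switched to the inaccessible ANA state; the multipath bdev is expected to retry them on the remaining path, which is exactly what the per-port I/O counts above were verifying.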
00:24:34.962 [2024-11-17 22:23:44.040587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.962 [2024-11-17 22:23:44.040657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.040703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.962 [2024-11-17 22:23:44.040722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.040762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.962 [2024-11-17 22:23:44.040787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.040805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.962 [2024-11-17 22:23:44.040819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.040838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.962 [2024-11-17 22:23:44.040850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.040869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.962 [2024-11-17 22:23:44.040882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.040900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.962 [2024-11-17 22:23:44.040915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.040933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.962 [2024-11-17 22:23:44.040946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.040964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.962 [2024-11-17 22:23:44.040978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.040997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.962 [2024-11-17 22:23:44.041010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.041029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.962 [2024-11-17 22:23:44.041060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.041082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.962 [2024-11-17 22:23:44.041095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.041112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.962 [2024-11-17 22:23:44.041131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.041148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.962 [2024-11-17 22:23:44.041161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.041189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.962 [2024-11-17 22:23:44.041202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.041221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.962 [2024-11-17 22:23:44.041234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.041706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.962 [2024-11-17 22:23:44.041731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.041755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.962 [2024-11-17 22:23:44.041787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.041809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.962 [2024-11-17 22:23:44.041823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.041844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.962 [2024-11-17 22:23:44.041858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.041879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.962 [2024-11-17 22:23:44.041892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:34.962 [2024-11-17 22:23:44.041912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.041926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.041945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.041959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.041991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.963 [2024-11-17 22:23:44.042085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.963 [2024-11-17 22:23:44.042118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.963 [2024-11-17 22:23:44.042219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.963 [2024-11-17 22:23:44.042349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.963 [2024-11-17 22:23:44.042946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.042978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.042997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.043011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.043029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.043042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.043062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.963 [2024-11-17 22:23:44.043076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.043094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.043119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.043138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.043151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.043175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.043188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.043207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.963 [2024-11-17 22:23:44.043220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:34.963 [2024-11-17 22:23:44.043239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.963 [2024-11-17 22:23:44.043252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 
dnr:0
00:24:34.964 [... long run of repetitive nvme_qpair.c NOTICE output condensed: 243:nvme_io_qpair_print_command traces for READ/WRITE commands on sqid:1 (len:8, LBAs in the ranges ~46544-47528, ~14344-15488 and ~27760-28544), each followed by a 474:spdk_nvme_print_completion line reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 (p:0 m:0 dnr:0), logged in bursts at 2024-11-17 22:23:44, 22:23:50 and 22:23:57 ...]
00:24:34.969 [2024-11-17 22:23:57.632679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:28544 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:24:34.969 [2024-11-17 22:23:57.632692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.632711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.969 [2024-11-17 22:23:57.632726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-11-17 22:23:57.633154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.969 [2024-11-17 22:23:57.633191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.969 [2024-11-17 22:23:57.633225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.969 [2024-11-17 22:23:57.633259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.969 [2024-11-17 22:23:57.633293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.969 [2024-11-17 22:23:57.633326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:28608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-11-17 22:23:57.633361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-11-17 22:23:57.633395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:47 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.969 [2024-11-17 22:23:57.633428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-11-17 22:23:57.633461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.969 [2024-11-17 22:23:57.633495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.969 [2024-11-17 22:23:57.633529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-11-17 22:23:57.633569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:28664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-11-17 22:23:57.633604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:28672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-11-17 22:23:57.633638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:28680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-11-17 22:23:57.633672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.969 [2024-11-17 22:23:57.633693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.633705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.633725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.633756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.633779] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.633793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.633814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.633827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.633961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:28720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-11-17 22:23:57.633983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-11-17 22:23:57.634054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-11-17 22:23:57.634132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 
m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-11-17 22:23:57.634493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-11-17 22:23:57.634529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:28768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-11-17 22:23:57.634567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-11-17 22:23:57.634641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-11-17 22:23:57.634685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.634967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.634990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.635003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.635025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.635038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.635062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.635075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.635098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.635113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.635136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.635157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.635180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-11-17 22:23:57.635194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.635216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.635230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.635252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.970 [2024-11-17 22:23:57.635266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.635288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-11-17 22:23:57.635302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.970 [2024-11-17 22:23:57.635325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:28864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-11-17 22:23:57.635338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.635375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.635422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.971 [2024-11-17 22:23:57.635460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.635496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.971 [2024-11-17 22:23:57.635531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.635567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.971 [2024-11-17 22:23:57.635610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.971 [2024-11-17 22:23:57.635649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.635684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.635728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.635783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.635819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.635855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.635892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.635927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.635963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:23:57.635986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:28384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:23:57.636000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.971 [2024-11-17 22:24:10.954208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.971 [2024-11-17 22:24:10.954267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.971 [2024-11-17 22:24:10.954530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.971 [2024-11-17 22:24:10.954551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.954575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.954598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.954622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.954645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.954668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.954692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.954715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.954738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 
22:24:10.954778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.954816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.954850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.954875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.954899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.954923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.954948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.954978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.954992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.955003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.955028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.955051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.955115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.955140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.955171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.955204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.955230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.955255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.955286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.955311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.955338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.955364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.955398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.955423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.955454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.955480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.955507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.972 [2024-11-17 22:24:10.955532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.955589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.955615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.955639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.972 [2024-11-17 22:24:10.955680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:34.972 [2024-11-17 22:24:10.955693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.955703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.955715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.955727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.955738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.955763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.955777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.955789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.955801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.955812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.955836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.955850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.955863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.955875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.955887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.955899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.955912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.955930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.955953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.955965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 
22:24:10.955978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.955990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.956062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.956150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.956172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.956269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.956483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.956528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.973 [2024-11-17 22:24:10.956551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.973 [2024-11-17 22:24:10.956684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.973 [2024-11-17 22:24:10.956695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.956707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95720 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.956724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.956736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.956764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.956800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.956812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.956825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.956836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.956849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.974 [2024-11-17 22:24:10.956864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.956876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.974 [2024-11-17 22:24:10.956887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.956910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.974 [2024-11-17 22:24:10.956922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.956941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.974 [2024-11-17 22:24:10.956953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.956966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.974 [2024-11-17 22:24:10.956977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.956989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 
[2024-11-17 22:24:10.957025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.974 [2024-11-17 22:24:10.957049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.974 [2024-11-17 22:24:10.957074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.974 [2024-11-17 22:24:10.957097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.974 [2024-11-17 22:24:10.957236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.974 [2024-11-17 22:24:10.957263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.974 [2024-11-17 22:24:10.957287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.974 [2024-11-17 22:24:10.957309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.974 [2024-11-17 22:24:10.957522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223e5b0 is same with the state(5) to be set 00:24:34.974 [2024-11-17 22:24:10.957548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.974 [2024-11-17 22:24:10.957557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.974 [2024-11-17 22:24:10.957576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95808 len:8 PRP1 0x0 PRP2 0x0 00:24:34.974 [2024-11-17 22:24:10.957588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.974 [2024-11-17 22:24:10.957654] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x223e5b0 was disconnected and freed. reset controller. 00:24:34.974 [2024-11-17 22:24:10.958926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.974 [2024-11-17 22:24:10.959007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e2790 (9): Bad file descriptor 00:24:34.974 [2024-11-17 22:24:10.959159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.974 [2024-11-17 22:24:10.959209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.974 [2024-11-17 22:24:10.959228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e2790 with addr=10.0.0.2, port=4421 00:24:34.974 [2024-11-17 22:24:10.959241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e2790 is same with the state(5) to be set 00:24:34.974 [2024-11-17 22:24:10.959349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e2790 (9): Bad file descriptor 00:24:34.974 [2024-11-17 22:24:10.959511] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.974 [2024-11-17 22:24:10.959533] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.974 [2024-11-17 22:24:10.959547] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.974 [2024-11-17 22:24:10.959647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.974 [2024-11-17 22:24:10.959669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.974 [2024-11-17 22:24:21.009382] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
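The block above is SPDK's qpair teardown path: for every command still queued on qpair 0x223e5b0, nvme_io_qpair_print_command prints the READ or WRITE and spdk_nvme_print_completion reports it as ABORTED - SQ DELETION (00/08); once the queue is drained the qpair is freed and bdev_nvme resets the controller. The first reconnect to 10.0.0.2 port 4421 fails with errno 111 (connection refused), the reset is retried, and it succeeds roughly ten seconds later at 22:24:21. A minimal sketch for tallying that output offline, assuming this console log has been saved to a file named multipath.log (hypothetical name):

    log=multipath.log
    # Aborted commands broken down by opcode (READ/WRITE), taken from the
    # nvme_io_qpair_print_command lines shown above.
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$log" |
      awk '{ n[$NF]++ } END { for (op in n) printf "%-6s %d\n", op, n[op] }'
    # Manual completions reported with status ABORTED - SQ DELETION (00/08).
    grep -c 'ABORTED - SQ DELETION (00/08)' "$log"

If the two totals match, every queued command received exactly one manual completion before the reset went ahead.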
00:24:34.974 Received shutdown signal, test time was about 55.297980 seconds
00:24:34.975
00:24:34.975                                                           Latency(us)
00:24:34.975 [2024-11-17T22:24:31.590Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:24:34.975 [2024-11-17T22:24:31.590Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:34.975 Verification LBA range: start 0x0 length 0x4000
00:24:34.975 Nvme0n1                     :      55.30   12369.05      48.32       0.00      0.00   10332.93     707.49 7015926.69
00:24:34.975 [2024-11-17T22:24:31.590Z] ===================================================================================================================
00:24:34.975 [2024-11-17T22:24:31.590Z] Total                       :              12369.05      48.32       0.00      0.00   10332.93     707.49 7015926.69
00:24:34.975 22:24:31 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:35.234 22:24:31 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:24:35.234 22:24:31 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:35.234 22:24:31 -- host/multipath.sh@125 -- # nvmftestfini
00:24:35.234 22:24:31 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:35.234 22:24:31 -- nvmf/common.sh@116 -- # sync
00:24:35.234 22:24:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:35.234 22:24:31 -- nvmf/common.sh@119 -- # set +e
00:24:35.234 22:24:31 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:35.234 22:24:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:35.234 rmmod nvme_tcp
00:24:35.234 rmmod nvme_fabrics
00:24:35.234 rmmod nvme_keyring
00:24:35.234 22:24:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:35.234 22:24:31 -- nvmf/common.sh@123 -- # set -e
00:24:35.234 22:24:31 -- nvmf/common.sh@124 -- # return 0
00:24:35.234 22:24:31 -- nvmf/common.sh@477 -- # '[' -n 88359 ']'
00:24:35.234 22:24:31 -- nvmf/common.sh@478 -- # killprocess 88359
00:24:35.234 22:24:31 -- common/autotest_common.sh@936 -- # '[' -z 88359 ']'
00:24:35.234 22:24:31 -- common/autotest_common.sh@940 -- # kill -0 88359
00:24:35.234 22:24:31 -- common/autotest_common.sh@941 -- # uname
00:24:35.234 22:24:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:35.234 22:24:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88359
00:24:35.234 22:24:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:35.234 22:24:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:35.234 killing process with pid 88359
00:24:35.234 22:24:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88359'
00:24:35.234 22:24:31 -- common/autotest_common.sh@955 -- # kill 88359
00:24:35.234 22:24:31 -- common/autotest_common.sh@960 -- # wait 88359
00:24:35.493 22:24:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:35.493 22:24:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:35.493 22:24:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:35.493 22:24:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:35.493 22:24:32 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:35.493 22:24:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:35.493 22:24:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:35.493 22:24:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:35.493 22:24:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:24:35.493
00:24:35.493 real 1m1.428s
00:24:35.493 user 2m51.690s
sys 0m14.866s 00:24:35.493 22:24:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:35.493 ************************************ 00:24:35.493 22:24:32 -- common/autotest_common.sh@10 -- # set +x 00:24:35.493 END TEST nvmf_multipath 00:24:35.493 ************************************ 00:24:35.493 22:24:32 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:35.493 22:24:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:35.493 22:24:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:35.493 22:24:32 -- common/autotest_common.sh@10 -- # set +x 00:24:35.493 ************************************ 00:24:35.493 START TEST nvmf_timeout 00:24:35.493 ************************************ 00:24:35.493 22:24:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:35.753 * Looking for test storage... 00:24:35.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:35.753 22:24:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:35.753 22:24:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:35.753 22:24:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:35.753 22:24:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:35.753 22:24:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:35.753 22:24:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:35.753 22:24:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:35.753 22:24:32 -- scripts/common.sh@335 -- # IFS=.-: 00:24:35.753 22:24:32 -- scripts/common.sh@335 -- # read -ra ver1 00:24:35.753 22:24:32 -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.753 22:24:32 -- scripts/common.sh@336 -- # read -ra ver2 00:24:35.753 22:24:32 -- scripts/common.sh@337 -- # local 'op=<' 00:24:35.753 22:24:32 -- scripts/common.sh@339 -- # ver1_l=2 00:24:35.753 22:24:32 -- scripts/common.sh@340 -- # ver2_l=1 00:24:35.753 22:24:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:35.753 22:24:32 -- scripts/common.sh@343 -- # case "$op" in 00:24:35.753 22:24:32 -- scripts/common.sh@344 -- # : 1 00:24:35.753 22:24:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:35.753 22:24:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.753 22:24:32 -- scripts/common.sh@364 -- # decimal 1 00:24:35.753 22:24:32 -- scripts/common.sh@352 -- # local d=1 00:24:35.753 22:24:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.753 22:24:32 -- scripts/common.sh@354 -- # echo 1 00:24:35.753 22:24:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:35.753 22:24:32 -- scripts/common.sh@365 -- # decimal 2 00:24:35.753 22:24:32 -- scripts/common.sh@352 -- # local d=2 00:24:35.753 22:24:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.753 22:24:32 -- scripts/common.sh@354 -- # echo 2 00:24:35.753 22:24:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:35.753 22:24:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:35.753 22:24:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:35.753 22:24:32 -- scripts/common.sh@367 -- # return 0 00:24:35.753 22:24:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.753 22:24:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:35.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.753 --rc genhtml_branch_coverage=1 00:24:35.753 --rc genhtml_function_coverage=1 00:24:35.753 --rc genhtml_legend=1 00:24:35.753 --rc geninfo_all_blocks=1 00:24:35.753 --rc geninfo_unexecuted_blocks=1 00:24:35.753 00:24:35.753 ' 00:24:35.753 22:24:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:35.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.753 --rc genhtml_branch_coverage=1 00:24:35.753 --rc genhtml_function_coverage=1 00:24:35.753 --rc genhtml_legend=1 00:24:35.753 --rc geninfo_all_blocks=1 00:24:35.753 --rc geninfo_unexecuted_blocks=1 00:24:35.753 00:24:35.753 ' 00:24:35.753 22:24:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:35.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.753 --rc genhtml_branch_coverage=1 00:24:35.753 --rc genhtml_function_coverage=1 00:24:35.753 --rc genhtml_legend=1 00:24:35.753 --rc geninfo_all_blocks=1 00:24:35.753 --rc geninfo_unexecuted_blocks=1 00:24:35.753 00:24:35.753 ' 00:24:35.753 22:24:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:35.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.753 --rc genhtml_branch_coverage=1 00:24:35.753 --rc genhtml_function_coverage=1 00:24:35.753 --rc genhtml_legend=1 00:24:35.753 --rc geninfo_all_blocks=1 00:24:35.753 --rc geninfo_unexecuted_blocks=1 00:24:35.753 00:24:35.753 ' 00:24:35.753 22:24:32 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:35.753 22:24:32 -- nvmf/common.sh@7 -- # uname -s 00:24:35.753 22:24:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.753 22:24:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.753 22:24:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.753 22:24:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.753 22:24:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.753 22:24:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.754 22:24:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.754 22:24:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.754 22:24:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.754 22:24:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.754 22:24:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:24:35.754 
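The xtrace above is autotest_common.sh picking lcov options: it extracts the installed lcov version (1.15), and scripts/common.sh's cmp_versions walks the dot-separated fields left to right to conclude that 1.15 < 2, so the pre-2.0 --rc lcov_* option spellings are exported. A condensed sketch of that comparison, in the spirit of the trace rather than the project's exact implementation:

    # version_lt A B  ->  success (0) when dotted version A sorts before B.
    # Numeric fields only; a missing field is treated as 0.
    version_lt() {
      local IFS=.-: a b i x y
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
      done
      return 1   # equal
    }

    version_lt 1.15 2 && echo 'lcov < 2: use the legacy --rc lcov_* option names'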
22:24:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:24:35.754 22:24:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.754 22:24:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.754 22:24:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:35.754 22:24:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:35.754 22:24:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.754 22:24:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.754 22:24:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.754 22:24:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.754 22:24:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.754 22:24:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.754 22:24:32 -- paths/export.sh@5 -- # export PATH 00:24:35.754 22:24:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.754 22:24:32 -- nvmf/common.sh@46 -- # : 0 00:24:35.754 22:24:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:35.754 22:24:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:35.754 22:24:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:35.754 22:24:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.754 22:24:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.754 22:24:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:24:35.754 22:24:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:35.754 22:24:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:35.754 22:24:32 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:35.754 22:24:32 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:35.754 22:24:32 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:35.754 22:24:32 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:35.754 22:24:32 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:35.754 22:24:32 -- host/timeout.sh@19 -- # nvmftestinit 00:24:35.754 22:24:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:35.754 22:24:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.754 22:24:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:35.754 22:24:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:35.754 22:24:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:35.754 22:24:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.754 22:24:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.754 22:24:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.754 22:24:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:35.754 22:24:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:35.754 22:24:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:35.754 22:24:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:35.754 22:24:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:35.754 22:24:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:35.754 22:24:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.754 22:24:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.754 22:24:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:35.754 22:24:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:35.754 22:24:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:35.754 22:24:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:35.754 22:24:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:35.754 22:24:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.754 22:24:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:35.754 22:24:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:35.754 22:24:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:35.754 22:24:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:35.754 22:24:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:35.754 22:24:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:35.754 Cannot find device "nvmf_tgt_br" 00:24:35.754 22:24:32 -- nvmf/common.sh@154 -- # true 00:24:35.754 22:24:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:35.754 Cannot find device "nvmf_tgt_br2" 00:24:35.754 22:24:32 -- nvmf/common.sh@155 -- # true 00:24:35.754 22:24:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:35.754 22:24:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:36.013 Cannot find device "nvmf_tgt_br" 00:24:36.014 22:24:32 -- nvmf/common.sh@157 -- # true 00:24:36.014 22:24:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:36.014 Cannot find device "nvmf_tgt_br2" 00:24:36.014 22:24:32 -- nvmf/common.sh@158 -- # true 00:24:36.014 22:24:32 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:36.014 22:24:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:36.014 22:24:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:36.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:36.014 22:24:32 -- nvmf/common.sh@161 -- # true 00:24:36.014 22:24:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:36.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:36.014 22:24:32 -- nvmf/common.sh@162 -- # true 00:24:36.014 22:24:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:36.014 22:24:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:36.014 22:24:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:36.014 22:24:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:36.014 22:24:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:36.014 22:24:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:36.014 22:24:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:36.014 22:24:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:36.014 22:24:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:36.014 22:24:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:36.014 22:24:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:36.014 22:24:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:36.014 22:24:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:36.014 22:24:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:36.014 22:24:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:36.014 22:24:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:36.014 22:24:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:36.014 22:24:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:36.014 22:24:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:36.014 22:24:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:36.014 22:24:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:36.014 22:24:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:36.014 22:24:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:36.014 22:24:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:36.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:24:36.014 00:24:36.014 --- 10.0.0.2 ping statistics --- 00:24:36.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.014 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:24:36.014 22:24:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:36.014 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:36.014 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:24:36.014 00:24:36.014 --- 10.0.0.3 ping statistics --- 00:24:36.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.014 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:36.014 22:24:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:36.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:36.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:24:36.273 00:24:36.273 --- 10.0.0.1 ping statistics --- 00:24:36.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.273 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:24:36.273 22:24:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.273 22:24:32 -- nvmf/common.sh@421 -- # return 0 00:24:36.273 22:24:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:36.273 22:24:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.273 22:24:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:36.273 22:24:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:36.273 22:24:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.273 22:24:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:36.273 22:24:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:36.273 22:24:32 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:36.273 22:24:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:36.273 22:24:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:36.273 22:24:32 -- common/autotest_common.sh@10 -- # set +x 00:24:36.273 22:24:32 -- nvmf/common.sh@469 -- # nvmfpid=89736 00:24:36.273 22:24:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:36.273 22:24:32 -- nvmf/common.sh@470 -- # waitforlisten 89736 00:24:36.273 22:24:32 -- common/autotest_common.sh@829 -- # '[' -z 89736 ']' 00:24:36.273 22:24:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.273 22:24:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.273 22:24:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.273 22:24:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.273 22:24:32 -- common/autotest_common.sh@10 -- # set +x 00:24:36.273 [2024-11-17 22:24:32.713197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:36.273 [2024-11-17 22:24:32.713281] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.273 [2024-11-17 22:24:32.851875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:36.531 [2024-11-17 22:24:32.934676] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:36.531 [2024-11-17 22:24:32.935015] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.531 [2024-11-17 22:24:32.935103] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
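Stripped of the xtrace noise, nvmf_veth_init above builds a small veth/bridge topology: the initiator side keeps 10.0.0.1 in the root namespace while nvmf_tgt runs inside nvmf_tgt_ns_spdk on 10.0.0.2, and the two veth peers are bridged through nvmf_br with iptables rules that admit NVMe/TCP traffic on port 4420. A condensed sketch of those steps (run as root; the second target interface nvmf_tgt_if2/10.0.0.3 and the preliminary cleanup are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the root-namespace peers together and let NVMe/TCP through.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2   # should answer from the namespace, as in the log above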
00:24:36.532 [2024-11-17 22:24:32.935427] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.532 [2024-11-17 22:24:32.935583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.532 [2024-11-17 22:24:32.935590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.099 22:24:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.099 22:24:33 -- common/autotest_common.sh@862 -- # return 0 00:24:37.099 22:24:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:37.099 22:24:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:37.099 22:24:33 -- common/autotest_common.sh@10 -- # set +x 00:24:37.099 22:24:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.099 22:24:33 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:37.099 22:24:33 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:37.358 [2024-11-17 22:24:33.966221] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.616 22:24:33 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:37.616 Malloc0 00:24:37.874 22:24:34 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:38.133 22:24:34 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:38.392 22:24:34 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.392 [2024-11-17 22:24:34.956423] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.392 22:24:34 -- host/timeout.sh@32 -- # bdevperf_pid=89827 00:24:38.392 22:24:34 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:38.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:38.392 22:24:34 -- host/timeout.sh@34 -- # waitforlisten 89827 /var/tmp/bdevperf.sock 00:24:38.392 22:24:34 -- common/autotest_common.sh@829 -- # '[' -z 89827 ']' 00:24:38.392 22:24:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:38.392 22:24:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.392 22:24:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:38.392 22:24:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.392 22:24:34 -- common/autotest_common.sh@10 -- # set +x 00:24:38.651 [2024-11-17 22:24:35.018680] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
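Condensed, the bring-up traced here and in the entries that follow is a short RPC sequence: a TCP transport plus a 64 MiB, 512-byte-block malloc namespace on the target, then a bdevperf host that attaches to the subsystem with a 5-second controller-loss timeout and a 2-second reconnect delay before perform_tests starts the verify workload. A minimal re-creation of those steps, assuming an nvmf_tgt is already listening on its default RPC socket and that $rootdir points at an SPDK checkout (illustrative variable, not taken from the log):

    rpc="$rootdir/scripts/rpc.py"

    # Target side: transport, backing bdev, subsystem, namespace, listener.
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: bdevperf issues 128-deep 4 KiB verify I/O and attaches with the
    # same options the test passes above and below.
    "$rootdir/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -f &
    sleep 1   # crude stand-in for the waitforlisten the test performs on the RPC socket
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests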
00:24:38.651 [2024-11-17 22:24:35.018967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89827 ] 00:24:38.651 [2024-11-17 22:24:35.154260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.910 [2024-11-17 22:24:35.271045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.477 22:24:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:39.477 22:24:36 -- common/autotest_common.sh@862 -- # return 0 00:24:39.477 22:24:36 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:39.736 22:24:36 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:39.994 NVMe0n1 00:24:39.994 22:24:36 -- host/timeout.sh@51 -- # rpc_pid=89869 00:24:39.994 22:24:36 -- host/timeout.sh@53 -- # sleep 1 00:24:39.994 22:24:36 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:40.252 Running I/O for 10 seconds... 00:24:41.191 22:24:37 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.191 [2024-11-17 22:24:37.768933] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.191 [2024-11-17 22:24:37.768985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.191 [2024-11-17 22:24:37.768997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.191 [2024-11-17 22:24:37.769006] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.191 [2024-11-17 22:24:37.769014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.191 [2024-11-17 22:24:37.769022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.191 [2024-11-17 22:24:37.769030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.191 [2024-11-17 22:24:37.769037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.191 [2024-11-17 22:24:37.769044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.191 [2024-11-17 22:24:37.769053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.191 [2024-11-17 22:24:37.769060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 
[2024-11-17 22:24:37.769074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769091] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769114] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769121] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769128] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769142] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769155] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769208] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769216] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769248] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769256] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769272] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769280] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769288] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769296] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769304] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769343] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769366] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769410] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769424] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769433] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769441] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769448] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769463] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769470] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769478] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769515] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769522] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769530] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2228a40 is same with the state(5) to be set 00:24:41.192 [2024-11-17 22:24:37.769868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.769909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.769937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.769949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.769960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.769969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.769979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.769996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.770023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.770032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.770042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.770051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.770076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.770085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.770096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.770105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.770116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.770125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.770136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.770145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.770155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.770164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.770175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.770184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.770194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.770203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.192 [2024-11-17 22:24:37.770214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.192 [2024-11-17 22:24:37.770224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 
[2024-11-17 22:24:37.770297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770512] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.193 [2024-11-17 22:24:37.770875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770912] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.193 [2024-11-17 22:24:37.770932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.770951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.193 [2024-11-17 22:24:37.770969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.193 [2024-11-17 22:24:37.770987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.770998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.771006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.193 [2024-11-17 22:24:37.771017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.193 [2024-11-17 22:24:37.771025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 
22:24:37.771493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.194 [2024-11-17 22:24:37.771635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771682] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.194 [2024-11-17 22:24:37.771799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.194 [2024-11-17 22:24:37.771810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.771819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.771829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.771837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.771847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.771856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.771867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.771875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.771886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.195 [2024-11-17 22:24:37.771895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.771906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.771914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.771925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.771934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.771944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.771952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.771962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.771971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.771981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.771990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:41.195 [2024-11-17 22:24:37.772101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.195 [2024-11-17 22:24:37.772186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.195 [2024-11-17 22:24:37.772206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.195 [2024-11-17 22:24:37.772263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.195 [2024-11-17 22:24:37.772300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.195 [2024-11-17 22:24:37.772336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.195 [2024-11-17 22:24:37.772467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a6050 is same with the state(5) to be set 00:24:41.195 [2024-11-17 22:24:37.772488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:41.195 [2024-11-17 22:24:37.772495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.195 [2024-11-17 22:24:37.772503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130840 len:8 PRP1 0x0 PRP2 0x0 00:24:41.195 [2024-11-17 22:24:37.772511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.195 [2024-11-17 22:24:37.772573] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6a6050 was disconnected and freed. reset controller. 00:24:41.195 [2024-11-17 22:24:37.772820] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.195 [2024-11-17 22:24:37.772906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x630dc0 (9): Bad file descriptor 00:24:41.195 [2024-11-17 22:24:37.776851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x630dc0 (9): Bad file descriptor 00:24:41.195 [2024-11-17 22:24:37.776896] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.195 [2024-11-17 22:24:37.776907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.195 [2024-11-17 22:24:37.776917] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.196 [2024-11-17 22:24:37.776935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.196 [2024-11-17 22:24:37.776944] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.196 22:24:37 -- host/timeout.sh@56 -- # sleep 2 00:24:43.728 [2024-11-17 22:24:39.777022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.728 [2024-11-17 22:24:39.777107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.728 [2024-11-17 22:24:39.777125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x630dc0 with addr=10.0.0.2, port=4420 00:24:43.728 [2024-11-17 22:24:39.777137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x630dc0 is same with the state(5) to be set 00:24:43.728 [2024-11-17 22:24:39.777156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x630dc0 (9): Bad file descriptor 00:24:43.728 [2024-11-17 22:24:39.777173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.728 [2024-11-17 22:24:39.777182] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.728 [2024-11-17 22:24:39.777191] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.728 [2024-11-17 22:24:39.777214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
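The completions dumped above are all flagged ABORTED - SQ DELETION (00/08), i.e. status code type 0 (generic) with status code 0x08, Command Aborted due to SQ Deletion: every queued READ/WRITE was aborted when the submission queue went away. The reconnect attempts that follow fail in posix_sock_create with errno = 111, which on Linux is ECONNREFUSED; nothing is accepting connections on 10.0.0.2:4420 at that point, so each retry fails until the listener is re-added later via nvmf_subsystem_add_listener. A one-liner to decode the errno value (assumes python3 is available on the test VM; not part of the test scripts):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # ECONNREFUSED - Connection refused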
00:24:43.728 [2024-11-17 22:24:39.777224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.728 22:24:39 -- host/timeout.sh@57 -- # get_controller 00:24:43.728 22:24:39 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:43.728 22:24:39 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:43.728 22:24:40 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:43.728 22:24:40 -- host/timeout.sh@58 -- # get_bdev 00:24:43.728 22:24:40 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:43.728 22:24:40 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:43.728 22:24:40 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:43.728 22:24:40 -- host/timeout.sh@61 -- # sleep 5 00:24:45.633 [2024-11-17 22:24:41.777289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.633 [2024-11-17 22:24:41.777370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.633 [2024-11-17 22:24:41.777387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x630dc0 with addr=10.0.0.2, port=4420 00:24:45.633 [2024-11-17 22:24:41.777399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x630dc0 is same with the state(5) to be set 00:24:45.633 [2024-11-17 22:24:41.777417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x630dc0 (9): Bad file descriptor 00:24:45.633 [2024-11-17 22:24:41.777433] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.633 [2024-11-17 22:24:41.777441] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.633 [2024-11-17 22:24:41.777450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.633 [2024-11-17 22:24:41.777469] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.633 [2024-11-17 22:24:41.777479] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.537 [2024-11-17 22:24:43.777495] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:47.537 [2024-11-17 22:24:43.777543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:47.537 [2024-11-17 22:24:43.777553] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:47.537 [2024-11-17 22:24:43.777561] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:47.537 [2024-11-17 22:24:43.777580] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
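For reference, the get_controller/get_bdev trace above boils down to querying the bdevperf RPC socket and comparing names; a minimal sketch of that check, using the rpc.py path and socket shown in the log (variable names here are illustrative, not the actual helpers from host/timeout.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')   # NVMe0 while the controller is attached
  bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')               # NVMe0n1 while the namespace bdev exists
  [[ $ctrlr == NVMe0 && $bdev == NVMe0n1 ]] && echo 'controller and bdev still present'

At this point both comparisons still see NVMe0 / NVMe0n1, and the test sleeps another 5 seconds before checking again.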
00:24:48.474 00:24:48.474 Latency(us) 00:24:48.474 [2024-11-17T22:24:45.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.474 [2024-11-17T22:24:45.089Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:48.474 Verification LBA range: start 0x0 length 0x4000 00:24:48.474 NVMe0n1 : 8.13 2002.45 7.82 15.74 0.00 63333.42 2204.39 7015926.69 00:24:48.474 [2024-11-17T22:24:45.089Z] =================================================================================================================== 00:24:48.474 [2024-11-17T22:24:45.089Z] Total : 2002.45 7.82 15.74 0.00 63333.42 2204.39 7015926.69 00:24:48.474 0 00:24:48.733 22:24:45 -- host/timeout.sh@62 -- # get_controller 00:24:48.733 22:24:45 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:48.733 22:24:45 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:48.992 22:24:45 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:48.992 22:24:45 -- host/timeout.sh@63 -- # get_bdev 00:24:48.992 22:24:45 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:48.992 22:24:45 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:49.252 22:24:45 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:49.252 22:24:45 -- host/timeout.sh@65 -- # wait 89869 00:24:49.252 22:24:45 -- host/timeout.sh@67 -- # killprocess 89827 00:24:49.252 22:24:45 -- common/autotest_common.sh@936 -- # '[' -z 89827 ']' 00:24:49.252 22:24:45 -- common/autotest_common.sh@940 -- # kill -0 89827 00:24:49.252 22:24:45 -- common/autotest_common.sh@941 -- # uname 00:24:49.252 22:24:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:49.252 22:24:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89827 00:24:49.252 killing process with pid 89827 00:24:49.252 Received shutdown signal, test time was about 9.138351 seconds 00:24:49.252 00:24:49.252 Latency(us) 00:24:49.252 [2024-11-17T22:24:45.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.252 [2024-11-17T22:24:45.867Z] =================================================================================================================== 00:24:49.252 [2024-11-17T22:24:45.867Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:49.252 22:24:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:49.252 22:24:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:49.252 22:24:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89827' 00:24:49.252 22:24:45 -- common/autotest_common.sh@955 -- # kill 89827 00:24:49.252 22:24:45 -- common/autotest_common.sh@960 -- # wait 89827 00:24:49.511 22:24:46 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.770 [2024-11-17 22:24:46.329662] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
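As a quick cross-check, the figures in the bdevperf summary above are self-consistent: 2002.45 IOPS of 4096-byte I/O is 7.82 MiB/s, and with queue depth 128 an average latency of 63333.42 us implies roughly 2021 IOPS, close to the measured value for a run that includes reconnect stalls. The awk one-liner below simply re-derives those numbers from the table; it is not part of the test output:

  awk 'BEGIN {
    iops = 2002.45; io_size = 4096; qd = 128; avg_lat_us = 63333.42
    printf "MiB/s = %.2f (table: 7.82)\n", iops * io_size / (1024 * 1024)
    printf "IOPS implied by qd/latency = %.0f (table: 2002.45 measured)\n", qd * 1e6 / avg_lat_us
  }'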
00:24:49.770 22:24:46 -- host/timeout.sh@74 -- # bdevperf_pid=90032 00:24:49.770 22:24:46 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:49.770 22:24:46 -- host/timeout.sh@76 -- # waitforlisten 90032 /var/tmp/bdevperf.sock 00:24:49.770 22:24:46 -- common/autotest_common.sh@829 -- # '[' -z 90032 ']' 00:24:49.770 22:24:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.770 22:24:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:49.770 22:24:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.770 22:24:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:49.770 22:24:46 -- common/autotest_common.sh@10 -- # set +x 00:24:50.029 [2024-11-17 22:24:46.388916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:50.029 [2024-11-17 22:24:46.388991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90032 ] 00:24:50.029 [2024-11-17 22:24:46.521732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.029 [2024-11-17 22:24:46.630432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.966 22:24:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.966 22:24:47 -- common/autotest_common.sh@862 -- # return 0 00:24:50.966 22:24:47 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:50.966 22:24:47 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:51.225 NVMe0n1 00:24:51.225 22:24:47 -- host/timeout.sh@84 -- # rpc_pid=90081 00:24:51.225 22:24:47 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:51.225 22:24:47 -- host/timeout.sh@86 -- # sleep 1 00:24:51.483 Running I/O for 10 seconds... 
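Condensed, the setup traced above is: start bdevperf in wait-for-RPC mode (-z) on its own socket, attach the TCP controller with the ctrlr-loss / fast-io-fail / reconnect timeouts this test exercises, then trigger the run. A rough stand-in for the script's flow (paths, address, NQN and flags exactly as in the log; the socket-wait loop is a simplified substitute for the script's waitforlisten helper):

  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done          # wait for the bdevperf RPC socket
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The attach prints the created bdev name (NVMe0n1), and perform_tests starts the 10-second verify workload that the listener removal at host/timeout.sh@87 below then disrupts.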
00:24:52.442 22:24:48 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.710 [2024-11-17 22:24:49.055878] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.710 [2024-11-17 22:24:49.056513] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.710 [2024-11-17 22:24:49.056609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.710 [2024-11-17 22:24:49.056673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.056728] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.056827] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.056889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.056951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057028] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057254] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057309] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057442] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057564] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057618] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057675] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057728] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.057973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058120] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058199] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058470] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058524] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058576] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058737] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.058947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059006] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059081] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059160] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059245] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059316] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059384] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059550] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059607] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.059994] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.060050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.060107] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.060195] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.060248] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.060322] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.060392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.060461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.060528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.060600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415b70 is same with the state(5) to be set 00:24:52.711 [2024-11-17 22:24:49.060956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.711 [2024-11-17 22:24:49.061006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.711 [2024-11-17 22:24:49.061030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.711 [2024-11-17 22:24:49.061043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.711 [2024-11-17 22:24:49.061057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.711 [2024-11-17 22:24:49.061067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.711 [2024-11-17 22:24:49.061080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.711 [2024-11-17 22:24:49.061090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.711 [2024-11-17 22:24:49.061102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.711 [2024-11-17 22:24:49.061117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.711 [2024-11-17 22:24:49.061129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.711 [2024-11-17 22:24:49.061140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.711 [2024-11-17 22:24:49.061152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.711 [2024-11-17 22:24:49.061163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.711 [2024-11-17 22:24:49.061175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.711 [2024-11-17 22:24:49.061184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.711 [2024-11-17 22:24:49.061196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.711 [2024-11-17 22:24:49.061205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.711 [2024-11-17 22:24:49.061217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.711 [2024-11-17 22:24:49.061226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.711 [2024-11-17 22:24:49.061238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.711 [2024-11-17 22:24:49.061247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.712 [2024-11-17 22:24:49.061322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.712 [2024-11-17 22:24:49.061343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 
[2024-11-17 22:24:49.061496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061706] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.712 [2024-11-17 22:24:49.061762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.712 [2024-11-17 22:24:49.061805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.712 [2024-11-17 22:24:49.061825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.712 [2024-11-17 22:24:49.061847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.712 [2024-11-17 22:24:49.061889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.061951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.712 [2024-11-17 22:24:49.061972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.061992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.062007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.062019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.062028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.062040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.062049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.062060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.062070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.062082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.062091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.062103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.712 [2024-11-17 22:24:49.062113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.712 [2024-11-17 22:24:49.062125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.713 [2024-11-17 22:24:49.062134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.713 [2024-11-17 22:24:49.062155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 
[2024-11-17 22:24:49.062167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.713 [2024-11-17 22:24:49.062217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.713 [2024-11-17 22:24:49.062258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.713 [2024-11-17 22:24:49.062300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.713 [2024-11-17 22:24:49.062384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062584] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.713 [2024-11-17 22:24:49.062793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:264 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.713 [2024-11-17 22:24:49.062835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.713 [2024-11-17 22:24:49.062856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.713 [2024-11-17 22:24:49.062877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.713 [2024-11-17 22:24:49.062918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.713 [2024-11-17 22:24:49.062951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.713 [2024-11-17 22:24:49.062961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.062972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.714 [2024-11-17 22:24:49.062981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.062992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.714 [2024-11-17 22:24:49.063001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.714 
[2024-11-17 22:24:49.063022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.714 [2024-11-17 22:24:49.063043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.714 [2024-11-17 22:24:49.063170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.714 [2024-11-17 22:24:49.063212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.714 [2024-11-17 22:24:49.063233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.714 [2024-11-17 22:24:49.063316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.714 [2024-11-17 22:24:49.063345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.714 [2024-11-17 22:24:49.063366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.714 [2024-11-17 22:24:49.063387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.714 [2024-11-17 22:24:49.063428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.714 [2024-11-17 22:24:49.063760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c88050 is same with the state(5) to be set 00:24:52.714 [2024-11-17 22:24:49.063783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:52.714 [2024-11-17 22:24:49.063792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:52.714 [2024-11-17 22:24:49.063801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:131032 len:8 PRP1 0x0 PRP2 0x0 00:24:52.714 [2024-11-17 22:24:49.063810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.714 [2024-11-17 22:24:49.063867] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c88050 was disconnected and freed. reset controller. 00:24:52.715 [2024-11-17 22:24:49.064114] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.715 [2024-11-17 22:24:49.064201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12dc0 (9): Bad file descriptor 00:24:52.715 [2024-11-17 22:24:49.067939] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12dc0 (9): Bad file descriptor 00:24:52.715 [2024-11-17 22:24:49.067973] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.715 [2024-11-17 22:24:49.067984] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.715 [2024-11-17 22:24:49.068011] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.715 [2024-11-17 22:24:49.068033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.715 [2024-11-17 22:24:49.068044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.715 22:24:49 -- host/timeout.sh@90 -- # sleep 1 00:24:53.649 [2024-11-17 22:24:50.068161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.649 [2024-11-17 22:24:50.068265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.649 [2024-11-17 22:24:50.068282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c12dc0 with addr=10.0.0.2, port=4420 00:24:53.649 [2024-11-17 22:24:50.068312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c12dc0 is same with the state(5) to be set 00:24:53.649 [2024-11-17 22:24:50.068336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12dc0 (9): Bad file descriptor 00:24:53.649 [2024-11-17 22:24:50.068355] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.649 [2024-11-17 22:24:50.068364] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.649 [2024-11-17 22:24:50.068375] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.649 [2024-11-17 22:24:50.068413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.649 [2024-11-17 22:24:50.068427] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.649 22:24:50 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.908 [2024-11-17 22:24:50.329873] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.908 22:24:50 -- host/timeout.sh@92 -- # wait 90081 00:24:54.475 [2024-11-17 22:24:51.079401] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:02.591 00:25:02.591 Latency(us) 00:25:02.591 [2024-11-17T22:24:59.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.591 [2024-11-17T22:24:59.206Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:02.591 Verification LBA range: start 0x0 length 0x4000 00:25:02.591 NVMe0n1 : 10.01 11326.62 44.24 0.00 0.00 11278.51 983.04 3019898.88 00:25:02.591 [2024-11-17T22:24:59.206Z] =================================================================================================================== 00:25:02.591 [2024-11-17T22:24:59.206Z] Total : 11326.62 44.24 0.00 0.00 11278.51 983.04 3019898.88 00:25:02.591 0 00:25:02.591 22:24:57 -- host/timeout.sh@97 -- # rpc_pid=90192 00:25:02.591 22:24:57 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:02.591 22:24:57 -- host/timeout.sh@98 -- # sleep 1 00:25:02.591 Running I/O for 10 seconds... 
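The target-side toggling that drives this reconnect cycle is the same pair of RPCs already visible in the trace; a minimal sketch, assuming the nvmf target application is reachable on its default RPC socket:

  # drop the TCP listener: queued I/O is aborted and the initiator begins reconnect
  # attempts (connect() fails with errno 111 while the listener is down)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # restore the listener; with --reconnect-delay-sec 1 the next retry succeeds and I/O resumes
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420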
00:25:02.591 22:24:58 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.591 [2024-11-17 22:24:59.203587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.591 [2024-11-17 22:24:59.203646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.591 [2024-11-17 22:24:59.203672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.591 [2024-11-17 22:24:59.203681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.591 [2024-11-17 22:24:59.203688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.591 [2024-11-17 22:24:59.203695] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.591 [2024-11-17 22:24:59.203702] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.591 [2024-11-17 22:24:59.203710] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.591 [2024-11-17 22:24:59.203718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.591 [2024-11-17 22:24:59.203725] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.591 [2024-11-17 22:24:59.203732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.591 [2024-11-17 22:24:59.203739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.591 [2024-11-17 22:24:59.203778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203822] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203891] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203899] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203946] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203963] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203970] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203987] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.203995] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204003] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204035] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204043] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204082] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204096] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204104] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204127] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204134] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204141] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204148] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204155] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204161] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204173] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204180] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204194] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204201] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204208] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2272c70 is same with the state(5) to be set 00:25:02.592 [2024-11-17 22:24:59.204468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.592 [2024-11-17 22:24:59.204510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.592 [2024-11-17 22:24:59.204532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.592 [2024-11-17 22:24:59.204543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.851 [2024-11-17 22:24:59.204889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.851 [2024-11-17 22:24:59.204898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.204924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.204934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.204945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.204969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.204980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.204989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.204999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 
22:24:59.205047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.852 [2024-11-17 22:24:59.205510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.852 [2024-11-17 22:24:59.205547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.852 [2024-11-17 22:24:59.205566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.852 [2024-11-17 22:24:59.205585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.852 [2024-11-17 22:24:59.205718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.852 [2024-11-17 22:24:59.205728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.205736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.205746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.205756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.205765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.205781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.205791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.205799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.205818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.205828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.205839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.205848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.205859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.205868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.205879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.205887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 
[2024-11-17 22:24:59.205898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.205907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.205917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.205925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.205936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.205944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.205954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.205963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.205973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.206004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.206041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.206121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.206269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.206340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:47 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.206378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.206447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.206482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.206527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.206545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.853 [2024-11-17 22:24:59.206569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11504 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.853 [2024-11-17 22:24:59.206588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.853 [2024-11-17 22:24:59.206598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:02.854 [2024-11-17 22:24:59.206766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.206938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.854 [2024-11-17 22:24:59.206958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.854 [2024-11-17 22:24:59.206977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.206988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.854 [2024-11-17 22:24:59.206997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.854 [2024-11-17 22:24:59.207016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.854 [2024-11-17 22:24:59.207052] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.854 [2024-11-17 22:24:59.207071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.207093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.854 [2024-11-17 22:24:59.207112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.207134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.854 [2024-11-17 22:24:59.207153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.207173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.854 [2024-11-17 22:24:59.207192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.207217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.207237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.207268] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.207292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.207337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.207372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.854 [2024-11-17 22:24:59.207392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c840b0 is same with the state(5) to be set 00:25:02.854 [2024-11-17 22:24:59.207428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:02.854 [2024-11-17 22:24:59.207436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:02.854 [2024-11-17 22:24:59.207460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11928 len:8 PRP1 0x0 PRP2 0x0 00:25:02.854 [2024-11-17 22:24:59.207469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.854 [2024-11-17 22:24:59.207513] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c840b0 was disconnected and freed. reset controller. 
00:25:02.854 [2024-11-17 22:24:59.207730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.854 [2024-11-17 22:24:59.207859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12dc0 (9): Bad file descriptor 00:25:02.854 [2024-11-17 22:24:59.207970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.854 [2024-11-17 22:24:59.208016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.854 [2024-11-17 22:24:59.208036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c12dc0 with addr=10.0.0.2, port=4420 00:25:02.854 [2024-11-17 22:24:59.208047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c12dc0 is same with the state(5) to be set 00:25:02.854 [2024-11-17 22:24:59.208065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12dc0 (9): Bad file descriptor 00:25:02.855 [2024-11-17 22:24:59.208081] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.855 [2024-11-17 22:24:59.208091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.855 [2024-11-17 22:24:59.208101] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.855 [2024-11-17 22:24:59.208121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.855 [2024-11-17 22:24:59.208165] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.855 22:24:59 -- host/timeout.sh@101 -- # sleep 3 00:25:03.855 [2024-11-17 22:25:00.208247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.855 [2024-11-17 22:25:00.208315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.855 [2024-11-17 22:25:00.208331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c12dc0 with addr=10.0.0.2, port=4420 00:25:03.855 [2024-11-17 22:25:00.208342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c12dc0 is same with the state(5) to be set 00:25:03.855 [2024-11-17 22:25:00.208360] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12dc0 (9): Bad file descriptor 00:25:03.855 [2024-11-17 22:25:00.208375] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.855 [2024-11-17 22:25:00.208383] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.855 [2024-11-17 22:25:00.208392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.855 [2024-11-17 22:25:00.208411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.855 [2024-11-17 22:25:00.208422] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:04.789 [2024-11-17 22:25:01.208479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.789 [2024-11-17 22:25:01.208537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.789 [2024-11-17 22:25:01.208552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c12dc0 with addr=10.0.0.2, port=4420 00:25:04.789 [2024-11-17 22:25:01.208561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c12dc0 is same with the state(5) to be set 00:25:04.789 [2024-11-17 22:25:01.208577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12dc0 (9): Bad file descriptor 00:25:04.789 [2024-11-17 22:25:01.208591] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:04.789 [2024-11-17 22:25:01.208598] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:04.789 [2024-11-17 22:25:01.208606] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:04.789 [2024-11-17 22:25:01.208622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.789 [2024-11-17 22:25:01.208632] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:05.724 [2024-11-17 22:25:02.210533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-11-17 22:25:02.210588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-11-17 22:25:02.210604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c12dc0 with addr=10.0.0.2, port=4420 00:25:05.724 [2024-11-17 22:25:02.210612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c12dc0 is same with the state(5) to be set 00:25:05.724 [2024-11-17 22:25:02.210708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12dc0 (9): Bad file descriptor 00:25:05.724 [2024-11-17 22:25:02.210846] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:05.724 [2024-11-17 22:25:02.210876] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:05.724 [2024-11-17 22:25:02.210884] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:05.724 [2024-11-17 22:25:02.212835] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:05.724 [2024-11-17 22:25:02.212870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:05.724 22:25:02 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.983 [2024-11-17 22:25:02.479195] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.983 22:25:02 -- host/timeout.sh@103 -- # wait 90192 00:25:06.920 [2024-11-17 22:25:03.235872] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
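Re-adding the TCP listener (host/timeout.sh@102) is what lets the next reconnect attempt and the controller reset succeed. Condensed, the remove/sleep/re-add sequence this part of the test drives looks like the sketch below; both rpc.py calls and all parameters are taken verbatim from this log, only their arrangement here is illustrative:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the listener: host-side connect() starts failing with errno 111 and queued I/O is aborted.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3   # host/timeout.sh@101: reconnect attempts keep failing during this window
  # Restore the listener so the pending controller reset can complete.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420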
00:25:12.192 00:25:12.192 Latency(us) 00:25:12.192 [2024-11-17T22:25:08.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.192 [2024-11-17T22:25:08.807Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:12.192 Verification LBA range: start 0x0 length 0x4000 00:25:12.192 NVMe0n1 : 10.01 9624.70 37.60 7333.78 0.00 7535.34 383.53 3019898.88 00:25:12.192 [2024-11-17T22:25:08.807Z] =================================================================================================================== 00:25:12.192 [2024-11-17T22:25:08.807Z] Total : 9624.70 37.60 7333.78 0.00 7535.34 0.00 3019898.88 00:25:12.192 0 00:25:12.192 22:25:08 -- host/timeout.sh@105 -- # killprocess 90032 00:25:12.192 22:25:08 -- common/autotest_common.sh@936 -- # '[' -z 90032 ']' 00:25:12.192 22:25:08 -- common/autotest_common.sh@940 -- # kill -0 90032 00:25:12.192 22:25:08 -- common/autotest_common.sh@941 -- # uname 00:25:12.192 22:25:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:12.192 22:25:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90032 00:25:12.192 killing process with pid 90032 00:25:12.192 Received shutdown signal, test time was about 10.000000 seconds 00:25:12.192 00:25:12.192 Latency(us) 00:25:12.192 [2024-11-17T22:25:08.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.192 [2024-11-17T22:25:08.807Z] =================================================================================================================== 00:25:12.192 [2024-11-17T22:25:08.807Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.192 22:25:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:12.192 22:25:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:12.192 22:25:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90032' 00:25:12.192 22:25:08 -- common/autotest_common.sh@955 -- # kill 90032 00:25:12.192 22:25:08 -- common/autotest_common.sh@960 -- # wait 90032 00:25:12.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:12.192 22:25:08 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:12.192 22:25:08 -- host/timeout.sh@110 -- # bdevperf_pid=90324 00:25:12.192 22:25:08 -- host/timeout.sh@112 -- # waitforlisten 90324 /var/tmp/bdevperf.sock 00:25:12.192 22:25:08 -- common/autotest_common.sh@829 -- # '[' -z 90324 ']' 00:25:12.193 22:25:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:12.193 22:25:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:12.193 22:25:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:12.193 22:25:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.193 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:25:12.193 [2024-11-17 22:25:08.496849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
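The second bdevperf instance is started with -z (wait for RPC) and -f, then configured over its own RPC socket before perform_tests is triggered. A condensed sketch of that launch sequence, using the commands visible in this log; the polling loop stands in for the waitforlisten helper and is an assumption, not its actual implementation:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  # Stand-in for waitforlisten: poll until the bdevperf RPC socket answers.
  until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  $rpc bdev_nvme_set_options -r -1 -e 9
  # Short ctrlr-loss timeout and reconnect delay so the timeout paths fire quickly in this test.
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &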
00:25:12.193 [2024-11-17 22:25:08.496933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90324 ] 00:25:12.193 [2024-11-17 22:25:08.620329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.193 [2024-11-17 22:25:08.704502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.126 22:25:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.126 22:25:09 -- common/autotest_common.sh@862 -- # return 0 00:25:13.126 22:25:09 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 90324 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:13.126 22:25:09 -- host/timeout.sh@116 -- # dtrace_pid=90351 00:25:13.126 22:25:09 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:13.384 22:25:09 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:13.643 NVMe0n1 00:25:13.643 22:25:10 -- host/timeout.sh@124 -- # rpc_pid=90400 00:25:13.643 22:25:10 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:13.643 22:25:10 -- host/timeout.sh@125 -- # sleep 1 00:25:13.643 Running I/O for 10 seconds... 00:25:14.580 22:25:11 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.843 [2024-11-17 22:25:11.350656] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.843 [2024-11-17 22:25:11.350715] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.843 [2024-11-17 22:25:11.350726] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.843 [2024-11-17 22:25:11.350744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.843 [2024-11-17 22:25:11.350773] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.843 [2024-11-17 22:25:11.350781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.843 [2024-11-17 22:25:11.350789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.843 [2024-11-17 22:25:11.350797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.843 [2024-11-17 22:25:11.350806] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.843 [2024-11-17 22:25:11.350815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.843 [2024-11-17 22:25:11.350822] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2276400 is same with the state(5) to be set [the same tcp.c:1576:nvmf_tcp_qpair_set_recv_state *ERROR* message repeats back-to-back well over a hundred more times, covering timestamps 22:25:11.350830 through 22:25:11.351711; the duplicate repetitions are omitted here] 00:25:14.845 [2024-11-17 22:25:11.351718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same 
with the state(5) to be set 00:25:14.845 [2024-11-17 22:25:11.351725] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.845 [2024-11-17 22:25:11.351731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.845 [2024-11-17 22:25:11.351738] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.845 [2024-11-17 22:25:11.351761] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.845 [2024-11-17 22:25:11.351769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.845 [2024-11-17 22:25:11.351778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.845 [2024-11-17 22:25:11.351786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276400 is same with the state(5) to be set 00:25:14.845 [2024-11-17 22:25:11.352090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352288] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:68320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:14.845 [2024-11-17 22:25:11.352693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.845 [2024-11-17 22:25:11.352836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.845 [2024-11-17 22:25:11.352845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.352853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.352862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.352870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.352879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.352887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.352897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.352905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 
22:25:11.352915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.352923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.352932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.352940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.352949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.352956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.352965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.352983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.352993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353330] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:64 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.846 [2024-11-17 22:25:11.353615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.846 [2024-11-17 22:25:11.353622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 
22:25:11.353933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.353961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.353993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354372] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.847 [2024-11-17 22:25:11.354462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.847 [2024-11-17 22:25:11.354471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.848 [2024-11-17 22:25:11.354786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2072050 is same with the state(5) to be set 00:25:14.848 [2024-11-17 22:25:11.354812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:14.848 [2024-11-17 22:25:11.354826] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:14.848 [2024-11-17 22:25:11.354834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31760 len:8 PRP1 0x0 PRP2 0x0 00:25:14.848 [2024-11-17 22:25:11.354842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.848 [2024-11-17 22:25:11.354895] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2072050 was disconnected and freed. reset controller. 00:25:14.848 [2024-11-17 22:25:11.355170] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.848 [2024-11-17 22:25:11.355249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffcdc0 (9): Bad file descriptor 00:25:14.848 [2024-11-17 22:25:11.355355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.848 [2024-11-17 22:25:11.355400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.848 [2024-11-17 22:25:11.355417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffcdc0 with addr=10.0.0.2, port=4420 00:25:14.848 [2024-11-17 22:25:11.355427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffcdc0 is same with the state(5) to be set 00:25:14.848 [2024-11-17 22:25:11.355444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffcdc0 (9): Bad file descriptor 00:25:14.848 [2024-11-17 22:25:11.355460] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.848 [2024-11-17 22:25:11.355469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.848 [2024-11-17 22:25:11.355480] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.848 [2024-11-17 22:25:11.355497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
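The repeated "connect() failed, errno = 111" entries in this stretch are the host-side reconnect loop hitting ECONNREFUSED: nothing is accepting connections on 10.0.0.2:4420 at this point, so every fresh TCP connect is refused and each controller reset attempt fails. A quick way to confirm what errno 111 means on Linux, assuming python3 is available on the test VM:

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused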
00:25:14.848 [2024-11-17 22:25:11.355509] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.848 22:25:11 -- host/timeout.sh@128 -- # wait 90400 00:25:16.754 [2024-11-17 22:25:13.355581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.754 [2024-11-17 22:25:13.355663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.754 [2024-11-17 22:25:13.355680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffcdc0 with addr=10.0.0.2, port=4420 00:25:16.754 [2024-11-17 22:25:13.355690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffcdc0 is same with the state(5) to be set 00:25:16.754 [2024-11-17 22:25:13.355707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffcdc0 (9): Bad file descriptor 00:25:16.754 [2024-11-17 22:25:13.355721] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.754 [2024-11-17 22:25:13.355729] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.754 [2024-11-17 22:25:13.355750] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.754 [2024-11-17 22:25:13.355770] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.754 [2024-11-17 22:25:13.355779] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.288 [2024-11-17 22:25:15.355856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.288 [2024-11-17 22:25:15.355938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.288 [2024-11-17 22:25:15.355954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffcdc0 with addr=10.0.0.2, port=4420 00:25:19.288 [2024-11-17 22:25:15.355965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffcdc0 is same with the state(5) to be set 00:25:19.288 [2024-11-17 22:25:15.355983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffcdc0 (9): Bad file descriptor 00:25:19.288 [2024-11-17 22:25:15.355999] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.288 [2024-11-17 22:25:15.356007] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.288 [2024-11-17 22:25:15.356015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.288 [2024-11-17 22:25:15.356032] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.288 [2024-11-17 22:25:15.356050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.191 [2024-11-17 22:25:17.356089] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
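The retry cadence is visible in the timestamps above: reconnect attempts at 22:25:11.355, 22:25:13.355, 22:25:15.355 and 22:25:17.356, roughly 2000 ms apart, matching the "reconnect delay bdev controller NVMe0" probes at 3345, 5345 and 7345 ms in the trace.txt dump that follows. A small awk sketch that would compute that spacing from a saved copy of the trace (the filename is illustrative; the test deletes its trace.txt right after reading it):

  awk -F: '/reconnect delay/ { if (prev != "") printf "%.1f ms since previous delay\n", $1 - prev; prev = $1 }' trace.txt
  # expected: roughly 2000.3 ms between consecutive delay probes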
00:25:21.191 [2024-11-17 22:25:17.356132] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:21.191 [2024-11-17 22:25:17.356142] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:21.191 [2024-11-17 22:25:17.356149] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:25:21.191 [2024-11-17 22:25:17.356166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:21.758
00:25:21.758 Latency(us)
00:25:21.758 [2024-11-17T22:25:18.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.758 [2024-11-17T22:25:18.373Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:25:21.758 NVMe0n1 : 8.17 3331.87 13.02 15.66 0.00 38185.23 2949.12 7015926.69
00:25:21.758 [2024-11-17T22:25:18.373Z] ===================================================================================================================
00:25:21.758 [2024-11-17T22:25:18.373Z] Total : 3331.87 13.02 15.66 0.00 38185.23 2949.12 7015926.69
00:25:21.758 0
00:25:22.017 22:25:18 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:22.017 Attaching 5 probes...
00:25:22.017 1344.706732: reset bdev controller NVMe0
00:25:22.017 1344.835542: reconnect bdev controller NVMe0
00:25:22.017 3345.068838: reconnect delay bdev controller NVMe0
00:25:22.017 3345.080702: reconnect bdev controller NVMe0
00:25:22.017 5345.340822: reconnect delay bdev controller NVMe0
00:25:22.017 5345.351345: reconnect bdev controller NVMe0
00:25:22.017 7345.610981: reconnect delay bdev controller NVMe0
00:25:22.017 7345.622680: reconnect bdev controller NVMe0
00:25:22.017 22:25:18 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:25:22.017 22:25:18 -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:25:22.017 22:25:18 -- host/timeout.sh@136 -- # kill 90351
00:25:22.017 22:25:18 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:22.017 22:25:18 -- host/timeout.sh@139 -- # killprocess 90324
00:25:22.017 22:25:18 -- common/autotest_common.sh@936 -- # '[' -z 90324 ']'
00:25:22.017 22:25:18 -- common/autotest_common.sh@940 -- # kill -0 90324
00:25:22.017 22:25:18 -- common/autotest_common.sh@941 -- # uname
00:25:22.017 22:25:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:22.017 22:25:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90324
00:25:22.017 22:25:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:22.017 22:25:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:22.017 22:25:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90324'
killing process with pid 90324
22:25:18 -- common/autotest_common.sh@955 -- # kill 90324
00:25:22.017 Received shutdown signal, test time was about 8.238788 seconds
00:25:22.017
00:25:22.017 Latency(us)
00:25:22.017 [2024-11-17T22:25:18.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:22.017 [2024-11-17T22:25:18.632Z] ===================================================================================================================
00:25:22.017 [2024-11-17T22:25:18.632Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:22.017 22:25:18 -- common/autotest_common.sh@960 -- # wait 90324
00:25:22.276 22:25:18
-- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.535 22:25:18 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:22.535 22:25:18 -- host/timeout.sh@145 -- # nvmftestfini 00:25:22.535 22:25:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:22.535 22:25:18 -- nvmf/common.sh@116 -- # sync 00:25:22.535 22:25:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:22.535 22:25:18 -- nvmf/common.sh@119 -- # set +e 00:25:22.535 22:25:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:22.535 22:25:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:22.535 rmmod nvme_tcp 00:25:22.535 rmmod nvme_fabrics 00:25:22.535 rmmod nvme_keyring 00:25:22.535 22:25:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:22.535 22:25:19 -- nvmf/common.sh@123 -- # set -e 00:25:22.535 22:25:19 -- nvmf/common.sh@124 -- # return 0 00:25:22.535 22:25:19 -- nvmf/common.sh@477 -- # '[' -n 89736 ']' 00:25:22.535 22:25:19 -- nvmf/common.sh@478 -- # killprocess 89736 00:25:22.535 22:25:19 -- common/autotest_common.sh@936 -- # '[' -z 89736 ']' 00:25:22.535 22:25:19 -- common/autotest_common.sh@940 -- # kill -0 89736 00:25:22.535 22:25:19 -- common/autotest_common.sh@941 -- # uname 00:25:22.535 22:25:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:22.535 22:25:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89736 00:25:22.535 22:25:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:22.535 killing process with pid 89736 00:25:22.535 22:25:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:22.535 22:25:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89736' 00:25:22.535 22:25:19 -- common/autotest_common.sh@955 -- # kill 89736 00:25:22.535 22:25:19 -- common/autotest_common.sh@960 -- # wait 89736 00:25:22.806 22:25:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:22.806 22:25:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:22.806 22:25:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:22.806 22:25:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.806 22:25:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:22.806 22:25:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.806 22:25:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.806 22:25:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.806 22:25:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:22.806 00:25:22.806 real 0m47.221s 00:25:22.806 user 2m17.650s 00:25:22.806 sys 0m5.650s 00:25:22.806 22:25:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:22.806 22:25:19 -- common/autotest_common.sh@10 -- # set +x 00:25:22.806 ************************************ 00:25:22.806 END TEST nvmf_timeout 00:25:22.806 ************************************ 00:25:22.806 22:25:19 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:22.806 22:25:19 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:22.806 22:25:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.806 22:25:19 -- common/autotest_common.sh@10 -- # set +x 00:25:22.806 22:25:19 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:22.806 00:25:22.806 real 18m46.648s 00:25:22.806 user 60m22.960s 00:25:22.806 sys 3m48.695s 00:25:22.806 22:25:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:22.806 22:25:19 -- common/autotest_common.sh@10 -- # set +x 
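Stripped of the xtrace noise, the teardown traced above (host/timeout.sh cleanup plus nvmftestfini) reduces to a handful of steps; a condensed sketch, where the PID, NQN and interface name are the ones from this particular run:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp        # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines come from this removal
  modprobe -v -r nvme-fabrics
  kill 89736                     # the nvmf_tgt process started for this test group, then reaped with wait
  ip -4 addr flush nvmf_init_if  # drop the initiator-side veth address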
00:25:22.806 ************************************ 00:25:22.806 END TEST nvmf_tcp 00:25:22.806 ************************************ 00:25:23.065 22:25:19 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:23.065 22:25:19 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:23.065 22:25:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:23.065 22:25:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:23.065 22:25:19 -- common/autotest_common.sh@10 -- # set +x 00:25:23.065 ************************************ 00:25:23.065 START TEST spdkcli_nvmf_tcp 00:25:23.065 ************************************ 00:25:23.065 22:25:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:23.065 * Looking for test storage... 00:25:23.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:23.065 22:25:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:23.065 22:25:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:23.065 22:25:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:23.065 22:25:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:23.065 22:25:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:23.065 22:25:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:23.065 22:25:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:23.065 22:25:19 -- scripts/common.sh@335 -- # IFS=.-: 00:25:23.065 22:25:19 -- scripts/common.sh@335 -- # read -ra ver1 00:25:23.065 22:25:19 -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.065 22:25:19 -- scripts/common.sh@336 -- # read -ra ver2 00:25:23.065 22:25:19 -- scripts/common.sh@337 -- # local 'op=<' 00:25:23.065 22:25:19 -- scripts/common.sh@339 -- # ver1_l=2 00:25:23.065 22:25:19 -- scripts/common.sh@340 -- # ver2_l=1 00:25:23.065 22:25:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:23.065 22:25:19 -- scripts/common.sh@343 -- # case "$op" in 00:25:23.065 22:25:19 -- scripts/common.sh@344 -- # : 1 00:25:23.065 22:25:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:23.065 22:25:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:23.065 22:25:19 -- scripts/common.sh@364 -- # decimal 1 00:25:23.065 22:25:19 -- scripts/common.sh@352 -- # local d=1 00:25:23.065 22:25:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.065 22:25:19 -- scripts/common.sh@354 -- # echo 1 00:25:23.065 22:25:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:23.065 22:25:19 -- scripts/common.sh@365 -- # decimal 2 00:25:23.065 22:25:19 -- scripts/common.sh@352 -- # local d=2 00:25:23.065 22:25:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.065 22:25:19 -- scripts/common.sh@354 -- # echo 2 00:25:23.065 22:25:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:23.065 22:25:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:23.065 22:25:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:23.065 22:25:19 -- scripts/common.sh@367 -- # return 0 00:25:23.065 22:25:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.065 22:25:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:23.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.065 --rc genhtml_branch_coverage=1 00:25:23.065 --rc genhtml_function_coverage=1 00:25:23.065 --rc genhtml_legend=1 00:25:23.065 --rc geninfo_all_blocks=1 00:25:23.065 --rc geninfo_unexecuted_blocks=1 00:25:23.065 00:25:23.065 ' 00:25:23.065 22:25:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:23.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.065 --rc genhtml_branch_coverage=1 00:25:23.065 --rc genhtml_function_coverage=1 00:25:23.065 --rc genhtml_legend=1 00:25:23.065 --rc geninfo_all_blocks=1 00:25:23.065 --rc geninfo_unexecuted_blocks=1 00:25:23.065 00:25:23.065 ' 00:25:23.065 22:25:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:23.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.065 --rc genhtml_branch_coverage=1 00:25:23.065 --rc genhtml_function_coverage=1 00:25:23.065 --rc genhtml_legend=1 00:25:23.065 --rc geninfo_all_blocks=1 00:25:23.065 --rc geninfo_unexecuted_blocks=1 00:25:23.065 00:25:23.065 ' 00:25:23.065 22:25:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:23.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.065 --rc genhtml_branch_coverage=1 00:25:23.065 --rc genhtml_function_coverage=1 00:25:23.065 --rc genhtml_legend=1 00:25:23.065 --rc geninfo_all_blocks=1 00:25:23.065 --rc geninfo_unexecuted_blocks=1 00:25:23.065 00:25:23.065 ' 00:25:23.065 22:25:19 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:23.065 22:25:19 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:23.065 22:25:19 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:23.065 22:25:19 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:23.065 22:25:19 -- nvmf/common.sh@7 -- # uname -s 00:25:23.065 22:25:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.065 22:25:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.065 22:25:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.065 22:25:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.065 22:25:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.065 22:25:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.065 22:25:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
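The scripts/common.sh trace above is the lcov version gate: cmp_versions splits "1.15" and "2" on dots and compares them field by field, so "lt 1.15 2" succeeds and the pre-2.0 option spelling (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is exported. A stand-alone sketch of that kind of comparison (not the project's exact helper, just the idea):

  # Return 0 (true) when version $1 sorts strictly before version $2.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1  # versions are equal
  }
  version_lt 1.15 2 && echo "1.15 is older than 2"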
00:25:23.065 22:25:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.065 22:25:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.065 22:25:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.065 22:25:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:25:23.065 22:25:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:25:23.065 22:25:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.065 22:25:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.065 22:25:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:23.065 22:25:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:23.065 22:25:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.065 22:25:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.065 22:25:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.065 22:25:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.065 22:25:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.065 22:25:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.065 22:25:19 -- paths/export.sh@5 -- # export PATH 00:25:23.065 22:25:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.065 22:25:19 -- nvmf/common.sh@46 -- # : 0 00:25:23.065 22:25:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:23.065 22:25:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:23.065 22:25:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:23.065 22:25:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.065 22:25:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.065 22:25:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:23.066 22:25:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:23.066 22:25:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:23.066 22:25:19 -- 
spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:23.066 22:25:19 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:23.066 22:25:19 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:23.066 22:25:19 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:23.066 22:25:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.066 22:25:19 -- common/autotest_common.sh@10 -- # set +x 00:25:23.066 22:25:19 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:23.325 22:25:19 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=90632 00:25:23.325 22:25:19 -- spdkcli/common.sh@34 -- # waitforlisten 90632 00:25:23.325 22:25:19 -- common/autotest_common.sh@829 -- # '[' -z 90632 ']' 00:25:23.325 22:25:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.325 22:25:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:23.325 22:25:19 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:23.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.325 22:25:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.325 22:25:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:23.325 22:25:19 -- common/autotest_common.sh@10 -- # set +x 00:25:23.325 [2024-11-17 22:25:19.724516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:23.325 [2024-11-17 22:25:19.724622] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90632 ] 00:25:23.325 [2024-11-17 22:25:19.859002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:23.583 [2024-11-17 22:25:19.964562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:23.583 [2024-11-17 22:25:19.964906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.583 [2024-11-17 22:25:19.964919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.151 22:25:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:24.151 22:25:20 -- common/autotest_common.sh@862 -- # return 0 00:25:24.151 22:25:20 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:24.151 22:25:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:24.151 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:25:24.410 22:25:20 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:24.410 22:25:20 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:24.410 22:25:20 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:24.410 22:25:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:24.410 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:25:24.410 22:25:20 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:24.410 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:24.410 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:24.410 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:24.410 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:24.410 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:24.410 
'\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:24.410 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:24.410 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:24.410 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:24.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:24.410 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:24.410 ' 00:25:24.667 [2024-11-17 22:25:21.224435] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:27.201 [2024-11-17 22:25:23.474449] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.587 [2024-11-17 22:25:24.764317] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:31.164 [2024-11-17 22:25:27.163439] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:33.067 [2024-11-17 22:25:29.226520] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4262 *** 00:25:34.444 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:34.444 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:34.444 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:34.444 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:34.444 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:34.444 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:34.444 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:34.444 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:34.444 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:34.445 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:34.445 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:34.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc6', 'Malloc6', True] 00:25:34.445 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:34.445 22:25:30 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:34.445 22:25:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:34.445 22:25:30 -- common/autotest_common.sh@10 -- # set +x 00:25:34.445 22:25:30 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:34.445 22:25:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:34.445 22:25:30 -- common/autotest_common.sh@10 -- # set +x 00:25:34.445 22:25:30 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:34.445 22:25:30 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:35.012 22:25:31 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:35.012 22:25:31 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:35.012 22:25:31 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:35.013 22:25:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:35.013 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:25:35.013 22:25:31 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:35.013 22:25:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:35.013 22:25:31 -- common/autotest_common.sh@10 -- # set +x 00:25:35.013 22:25:31 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:35.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:35.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:35.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:35.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:35.013 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:35.013 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:35.013 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:35.013 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:35.013 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:35.013 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:35.013 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:35.013 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:35.013 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:35.013 ' 00:25:40.283 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:40.283 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:40.283 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:40.283 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:40.283 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 
127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:40.283 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:40.283 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:40.283 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:40.283 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:40.283 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:40.283 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:40.283 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:40.283 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:40.283 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:40.542 22:25:36 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:40.542 22:25:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.542 22:25:36 -- common/autotest_common.sh@10 -- # set +x 00:25:40.542 22:25:37 -- spdkcli/nvmf.sh@90 -- # killprocess 90632 00:25:40.542 22:25:37 -- common/autotest_common.sh@936 -- # '[' -z 90632 ']' 00:25:40.542 22:25:37 -- common/autotest_common.sh@940 -- # kill -0 90632 00:25:40.542 22:25:37 -- common/autotest_common.sh@941 -- # uname 00:25:40.542 22:25:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:40.542 22:25:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90632 00:25:40.542 22:25:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:40.542 22:25:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:40.542 killing process with pid 90632 00:25:40.542 22:25:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90632' 00:25:40.542 22:25:37 -- common/autotest_common.sh@955 -- # kill 90632 00:25:40.542 [2024-11-17 22:25:37.064412] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:40.542 22:25:37 -- common/autotest_common.sh@960 -- # wait 90632 00:25:40.801 22:25:37 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:40.801 22:25:37 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:40.801 22:25:37 -- spdkcli/common.sh@13 -- # '[' -n 90632 ']' 00:25:40.801 22:25:37 -- spdkcli/common.sh@14 -- # killprocess 90632 00:25:40.801 22:25:37 -- common/autotest_common.sh@936 -- # '[' -z 90632 ']' 00:25:40.801 22:25:37 -- common/autotest_common.sh@940 -- # kill -0 90632 00:25:40.801 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (90632) - No such process 00:25:40.801 Process with pid 90632 is not found 00:25:40.801 22:25:37 -- common/autotest_common.sh@963 -- # echo 'Process with pid 90632 is not found' 00:25:40.801 22:25:37 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:40.801 22:25:37 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:40.801 22:25:37 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:40.801 00:25:40.801 real 0m17.830s 00:25:40.801 user 0m38.624s 00:25:40.801 sys 0m0.886s 00:25:40.801 22:25:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:40.801 22:25:37 -- common/autotest_common.sh@10 -- # set +x 00:25:40.801 
************************************ 00:25:40.801 END TEST spdkcli_nvmf_tcp 00:25:40.801 ************************************ 00:25:40.801 22:25:37 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:40.801 22:25:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:40.801 22:25:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:40.801 22:25:37 -- common/autotest_common.sh@10 -- # set +x 00:25:40.801 ************************************ 00:25:40.801 START TEST nvmf_identify_passthru 00:25:40.801 ************************************ 00:25:40.801 22:25:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:41.061 * Looking for test storage... 00:25:41.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:41.061 22:25:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:41.061 22:25:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:41.061 22:25:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:41.061 22:25:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:41.061 22:25:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:41.061 22:25:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:41.061 22:25:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:41.061 22:25:37 -- scripts/common.sh@335 -- # IFS=.-: 00:25:41.061 22:25:37 -- scripts/common.sh@335 -- # read -ra ver1 00:25:41.061 22:25:37 -- scripts/common.sh@336 -- # IFS=.-: 00:25:41.061 22:25:37 -- scripts/common.sh@336 -- # read -ra ver2 00:25:41.061 22:25:37 -- scripts/common.sh@337 -- # local 'op=<' 00:25:41.061 22:25:37 -- scripts/common.sh@339 -- # ver1_l=2 00:25:41.061 22:25:37 -- scripts/common.sh@340 -- # ver2_l=1 00:25:41.061 22:25:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:41.061 22:25:37 -- scripts/common.sh@343 -- # case "$op" in 00:25:41.061 22:25:37 -- scripts/common.sh@344 -- # : 1 00:25:41.061 22:25:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:41.061 22:25:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:41.061 22:25:37 -- scripts/common.sh@364 -- # decimal 1 00:25:41.061 22:25:37 -- scripts/common.sh@352 -- # local d=1 00:25:41.061 22:25:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:41.061 22:25:37 -- scripts/common.sh@354 -- # echo 1 00:25:41.061 22:25:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:41.061 22:25:37 -- scripts/common.sh@365 -- # decimal 2 00:25:41.061 22:25:37 -- scripts/common.sh@352 -- # local d=2 00:25:41.061 22:25:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:41.061 22:25:37 -- scripts/common.sh@354 -- # echo 2 00:25:41.061 22:25:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:41.061 22:25:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:41.061 22:25:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:41.061 22:25:37 -- scripts/common.sh@367 -- # return 0 00:25:41.061 22:25:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:41.061 22:25:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:41.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.061 --rc genhtml_branch_coverage=1 00:25:41.061 --rc genhtml_function_coverage=1 00:25:41.061 --rc genhtml_legend=1 00:25:41.061 --rc geninfo_all_blocks=1 00:25:41.061 --rc geninfo_unexecuted_blocks=1 00:25:41.061 00:25:41.061 ' 00:25:41.061 22:25:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:41.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.061 --rc genhtml_branch_coverage=1 00:25:41.061 --rc genhtml_function_coverage=1 00:25:41.061 --rc genhtml_legend=1 00:25:41.061 --rc geninfo_all_blocks=1 00:25:41.061 --rc geninfo_unexecuted_blocks=1 00:25:41.061 00:25:41.061 ' 00:25:41.061 22:25:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:41.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.061 --rc genhtml_branch_coverage=1 00:25:41.061 --rc genhtml_function_coverage=1 00:25:41.061 --rc genhtml_legend=1 00:25:41.061 --rc geninfo_all_blocks=1 00:25:41.061 --rc geninfo_unexecuted_blocks=1 00:25:41.061 00:25:41.061 ' 00:25:41.061 22:25:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:41.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.061 --rc genhtml_branch_coverage=1 00:25:41.061 --rc genhtml_function_coverage=1 00:25:41.061 --rc genhtml_legend=1 00:25:41.061 --rc geninfo_all_blocks=1 00:25:41.061 --rc geninfo_unexecuted_blocks=1 00:25:41.061 00:25:41.061 ' 00:25:41.061 22:25:37 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:41.061 22:25:37 -- nvmf/common.sh@7 -- # uname -s 00:25:41.061 22:25:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:41.061 22:25:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:41.061 22:25:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:41.061 22:25:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:41.061 22:25:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:41.061 22:25:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:41.061 22:25:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:41.061 22:25:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:41.061 22:25:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:41.061 22:25:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:41.061 22:25:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 
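The nvme gen-hostnqn call above produced the UUID-based host NQN that these tests reuse (the same UUID becomes NVME_HOSTID on the next line). The nqn.2014-08.org.nvmexpress:uuid: prefix is the spec-defined form, so an equivalent value can be built without nvme-cli; a sketch, assuming uuidgen is installed:

  HOSTID=$(uuidgen)
  NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${HOSTID}"
  echo "${NVME_HOSTNQN}"
  # e.g. nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671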
00:25:41.061 22:25:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:25:41.061 22:25:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.061 22:25:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.061 22:25:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:41.061 22:25:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:41.061 22:25:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.061 22:25:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.061 22:25:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.061 22:25:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.061 22:25:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.061 22:25:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.061 22:25:37 -- paths/export.sh@5 -- # export PATH 00:25:41.061 22:25:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.061 22:25:37 -- nvmf/common.sh@46 -- # : 0 00:25:41.061 22:25:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:41.061 22:25:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:41.061 22:25:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:41.061 22:25:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:41.061 22:25:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:41.061 22:25:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:41.061 22:25:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:41.061 22:25:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:41.061 22:25:37 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:41.061 22:25:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.061 22:25:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.061 22:25:37 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.061 22:25:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.061 22:25:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.061 22:25:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.061 22:25:37 -- paths/export.sh@5 -- # export PATH 00:25:41.061 22:25:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.061 22:25:37 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:41.061 22:25:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:41.061 22:25:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:41.061 22:25:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:41.061 22:25:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:41.061 22:25:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:41.061 22:25:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.061 22:25:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:41.061 22:25:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.061 22:25:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:41.061 22:25:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:41.061 22:25:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:41.061 22:25:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:41.061 22:25:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:41.061 22:25:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:41.061 22:25:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.061 22:25:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.061 22:25:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:41.061 22:25:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:41.061 22:25:37 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:41.061 22:25:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:41.061 22:25:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:41.061 22:25:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.061 22:25:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:41.061 22:25:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:41.061 22:25:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:41.061 22:25:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:41.061 22:25:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:41.061 22:25:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:41.061 Cannot find device "nvmf_tgt_br" 00:25:41.061 22:25:37 -- nvmf/common.sh@154 -- # true 00:25:41.061 22:25:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:41.061 Cannot find device "nvmf_tgt_br2" 00:25:41.061 22:25:37 -- nvmf/common.sh@155 -- # true 00:25:41.061 22:25:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:41.061 22:25:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:41.061 Cannot find device "nvmf_tgt_br" 00:25:41.061 22:25:37 -- nvmf/common.sh@157 -- # true 00:25:41.061 22:25:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:41.061 Cannot find device "nvmf_tgt_br2" 00:25:41.061 22:25:37 -- nvmf/common.sh@158 -- # true 00:25:41.061 22:25:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:41.320 22:25:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:41.320 22:25:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:41.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:41.320 22:25:37 -- nvmf/common.sh@161 -- # true 00:25:41.320 22:25:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:41.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:41.320 22:25:37 -- nvmf/common.sh@162 -- # true 00:25:41.320 22:25:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:41.320 22:25:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:41.320 22:25:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:41.320 22:25:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:41.320 22:25:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:41.320 22:25:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:41.320 22:25:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:41.320 22:25:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:41.320 22:25:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:41.320 22:25:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:41.320 22:25:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:41.320 22:25:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:41.320 22:25:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:41.321 22:25:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:25:41.321 22:25:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:41.321 22:25:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:41.321 22:25:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:41.321 22:25:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:41.321 22:25:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:41.321 22:25:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:41.321 22:25:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:41.321 22:25:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:41.321 22:25:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:41.321 22:25:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:41.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:25:41.321 00:25:41.321 --- 10.0.0.2 ping statistics --- 00:25:41.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.321 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:25:41.321 22:25:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:41.321 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:41.321 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:25:41.321 00:25:41.321 --- 10.0.0.3 ping statistics --- 00:25:41.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.321 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:25:41.321 22:25:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:41.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:41.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:25:41.321 00:25:41.321 --- 10.0.0.1 ping statistics --- 00:25:41.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.321 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:25:41.321 22:25:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.321 22:25:37 -- nvmf/common.sh@421 -- # return 0 00:25:41.321 22:25:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:41.321 22:25:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.321 22:25:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:41.321 22:25:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:41.321 22:25:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.321 22:25:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:41.321 22:25:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:41.579 22:25:37 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:41.579 22:25:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.579 22:25:37 -- common/autotest_common.sh@10 -- # set +x 00:25:41.579 22:25:37 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:41.579 22:25:37 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:41.579 22:25:37 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:41.579 22:25:37 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:41.580 22:25:37 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:41.580 22:25:37 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:41.580 22:25:37 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:41.580 22:25:37 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:41.580 22:25:37 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:41.580 22:25:37 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:41.580 22:25:38 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:41.580 22:25:38 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:41.580 22:25:38 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:41.580 22:25:38 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:41.580 22:25:38 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:41.580 22:25:38 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:41.580 22:25:38 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:41.580 22:25:38 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:41.580 22:25:38 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:41.839 22:25:38 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:41.839 22:25:38 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:41.839 22:25:38 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:41.839 22:25:38 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:41.839 22:25:38 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:41.839 22:25:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:41.839 22:25:38 -- common/autotest_common.sh@10 -- # set +x 00:25:41.839 22:25:38 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:41.839 22:25:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.839 22:25:38 -- common/autotest_common.sh@10 -- # set +x 00:25:41.839 22:25:38 -- target/identify_passthru.sh@31 -- # nvmfpid=91131 00:25:41.839 22:25:38 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.839 22:25:38 -- target/identify_passthru.sh@35 -- # waitforlisten 91131 00:25:41.839 22:25:38 -- common/autotest_common.sh@829 -- # '[' -z 91131 ']' 00:25:41.839 22:25:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.839 22:25:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.839 22:25:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.839 22:25:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.839 22:25:38 -- common/autotest_common.sh@10 -- # set +x 00:25:41.839 22:25:38 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:42.097 [2024-11-17 22:25:38.508719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:42.097 [2024-11-17 22:25:38.508844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.097 [2024-11-17 22:25:38.648904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:42.356 [2024-11-17 22:25:38.763594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:42.356 [2024-11-17 22:25:38.763786] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.356 [2024-11-17 22:25:38.763803] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.356 [2024-11-17 22:25:38.763815] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
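With the target now running inside the nvmf_tgt_ns_spdk namespace and parked on --wait-for-rpc, identify_passthru.sh configures it through rpc_cmd, the harness wrapper around scripts/rpc.py and the target's RPC socket. Stripped of the wrapper, the sequence issued next -- all of it visible verbatim in the trace that follows -- is roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_set_config --passthru-identify-ctrlr     # startup-time config; this is why the target was launched with --wait-for-rpc
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # The actual check: identify the exported subsystem over TCP and compare the
    # serial/model numbers with what the local PCIe controller reports (12340 / QEMU here).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The commands and arguments are the ones from this run; the rpc variable above is only shorthand for how rpc_cmd reaches the target.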
00:25:42.356 [2024-11-17 22:25:38.763929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.356 [2024-11-17 22:25:38.764564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.356 [2024-11-17 22:25:38.764697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.356 [2024-11-17 22:25:38.764911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.932 22:25:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.932 22:25:39 -- common/autotest_common.sh@862 -- # return 0 00:25:42.932 22:25:39 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:42.932 22:25:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.932 22:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:42.932 22:25:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.932 22:25:39 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:42.932 22:25:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.932 22:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:43.191 [2024-11-17 22:25:39.609555] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:43.191 22:25:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.191 22:25:39 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:43.191 22:25:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.191 22:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:43.191 [2024-11-17 22:25:39.623580] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.191 22:25:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.191 22:25:39 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:43.191 22:25:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:43.191 22:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:43.191 22:25:39 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:43.191 22:25:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.191 22:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:43.191 Nvme0n1 00:25:43.191 22:25:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.191 22:25:39 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:43.191 22:25:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.191 22:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:43.191 22:25:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.191 22:25:39 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:43.191 22:25:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.191 22:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:43.191 22:25:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.191 22:25:39 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.191 22:25:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.191 22:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:43.191 [2024-11-17 22:25:39.769825] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.191 22:25:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:43.191 22:25:39 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:43.191 22:25:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.191 22:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:43.191 [2024-11-17 22:25:39.777567] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:43.191 [ 00:25:43.191 { 00:25:43.191 "allow_any_host": true, 00:25:43.191 "hosts": [], 00:25:43.191 "listen_addresses": [], 00:25:43.191 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:43.191 "subtype": "Discovery" 00:25:43.191 }, 00:25:43.191 { 00:25:43.191 "allow_any_host": true, 00:25:43.191 "hosts": [], 00:25:43.191 "listen_addresses": [ 00:25:43.191 { 00:25:43.191 "adrfam": "IPv4", 00:25:43.191 "traddr": "10.0.0.2", 00:25:43.191 "transport": "TCP", 00:25:43.191 "trsvcid": "4420", 00:25:43.191 "trtype": "TCP" 00:25:43.191 } 00:25:43.191 ], 00:25:43.191 "max_cntlid": 65519, 00:25:43.191 "max_namespaces": 1, 00:25:43.191 "min_cntlid": 1, 00:25:43.191 "model_number": "SPDK bdev Controller", 00:25:43.191 "namespaces": [ 00:25:43.191 { 00:25:43.191 "bdev_name": "Nvme0n1", 00:25:43.191 "name": "Nvme0n1", 00:25:43.191 "nguid": "EBD22FF52750472C9F4823C9B16148C3", 00:25:43.191 "nsid": 1, 00:25:43.191 "uuid": "ebd22ff5-2750-472c-9f48-23c9b16148c3" 00:25:43.191 } 00:25:43.191 ], 00:25:43.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:43.191 "serial_number": "SPDK00000000000001", 00:25:43.191 "subtype": "NVMe" 00:25:43.191 } 00:25:43.191 ] 00:25:43.191 22:25:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.191 22:25:39 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:43.191 22:25:39 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:43.191 22:25:39 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:43.450 22:25:40 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:43.450 22:25:40 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:43.450 22:25:40 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:43.450 22:25:40 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:43.708 22:25:40 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:43.708 22:25:40 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:43.708 22:25:40 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:43.708 22:25:40 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.708 22:25:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.708 22:25:40 -- common/autotest_common.sh@10 -- # set +x 00:25:43.709 22:25:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.709 22:25:40 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:43.709 22:25:40 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:43.709 22:25:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:43.709 22:25:40 -- nvmf/common.sh@116 -- # sync 00:25:43.709 22:25:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:43.709 22:25:40 -- nvmf/common.sh@119 -- # set +e 00:25:43.709 22:25:40 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:43.709 22:25:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:43.709 rmmod nvme_tcp 00:25:43.709 rmmod nvme_fabrics 00:25:43.971 rmmod nvme_keyring 00:25:43.971 22:25:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:43.971 22:25:40 -- nvmf/common.sh@123 -- # set -e 00:25:43.971 22:25:40 -- nvmf/common.sh@124 -- # return 0 00:25:43.971 22:25:40 -- nvmf/common.sh@477 -- # '[' -n 91131 ']' 00:25:43.971 22:25:40 -- nvmf/common.sh@478 -- # killprocess 91131 00:25:43.971 22:25:40 -- common/autotest_common.sh@936 -- # '[' -z 91131 ']' 00:25:43.971 22:25:40 -- common/autotest_common.sh@940 -- # kill -0 91131 00:25:43.971 22:25:40 -- common/autotest_common.sh@941 -- # uname 00:25:43.971 22:25:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:43.971 22:25:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91131 00:25:43.971 22:25:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:43.971 22:25:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:43.971 killing process with pid 91131 00:25:43.971 22:25:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91131' 00:25:43.971 22:25:40 -- common/autotest_common.sh@955 -- # kill 91131 00:25:43.971 [2024-11-17 22:25:40.385282] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:43.971 22:25:40 -- common/autotest_common.sh@960 -- # wait 91131 00:25:44.231 22:25:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:44.231 22:25:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:44.231 22:25:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:44.231 22:25:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.231 22:25:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:44.231 22:25:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.231 22:25:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:44.231 22:25:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.231 22:25:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:44.231 ************************************ 00:25:44.231 END TEST nvmf_identify_passthru 00:25:44.231 ************************************ 00:25:44.231 00:25:44.231 real 0m3.321s 00:25:44.231 user 0m7.989s 00:25:44.231 sys 0m0.887s 00:25:44.231 22:25:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:44.231 22:25:40 -- common/autotest_common.sh@10 -- # set +x 00:25:44.231 22:25:40 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:44.231 22:25:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:44.231 22:25:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:44.231 22:25:40 -- common/autotest_common.sh@10 -- # set +x 00:25:44.231 ************************************ 00:25:44.231 START TEST nvmf_dif 00:25:44.231 ************************************ 00:25:44.231 22:25:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:44.231 * Looking for test storage... 
00:25:44.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:44.231 22:25:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:44.231 22:25:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:44.231 22:25:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:44.491 22:25:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:44.491 22:25:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:44.491 22:25:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:44.491 22:25:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:44.491 22:25:40 -- scripts/common.sh@335 -- # IFS=.-: 00:25:44.491 22:25:40 -- scripts/common.sh@335 -- # read -ra ver1 00:25:44.491 22:25:40 -- scripts/common.sh@336 -- # IFS=.-: 00:25:44.491 22:25:40 -- scripts/common.sh@336 -- # read -ra ver2 00:25:44.491 22:25:40 -- scripts/common.sh@337 -- # local 'op=<' 00:25:44.491 22:25:40 -- scripts/common.sh@339 -- # ver1_l=2 00:25:44.491 22:25:40 -- scripts/common.sh@340 -- # ver2_l=1 00:25:44.491 22:25:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:44.491 22:25:40 -- scripts/common.sh@343 -- # case "$op" in 00:25:44.491 22:25:40 -- scripts/common.sh@344 -- # : 1 00:25:44.491 22:25:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:44.491 22:25:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:44.491 22:25:40 -- scripts/common.sh@364 -- # decimal 1 00:25:44.491 22:25:40 -- scripts/common.sh@352 -- # local d=1 00:25:44.491 22:25:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.491 22:25:40 -- scripts/common.sh@354 -- # echo 1 00:25:44.491 22:25:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:44.491 22:25:40 -- scripts/common.sh@365 -- # decimal 2 00:25:44.491 22:25:40 -- scripts/common.sh@352 -- # local d=2 00:25:44.491 22:25:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.491 22:25:40 -- scripts/common.sh@354 -- # echo 2 00:25:44.491 22:25:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:44.491 22:25:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:44.491 22:25:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:44.491 22:25:40 -- scripts/common.sh@367 -- # return 0 00:25:44.491 22:25:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:44.491 22:25:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:44.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.491 --rc genhtml_branch_coverage=1 00:25:44.491 --rc genhtml_function_coverage=1 00:25:44.491 --rc genhtml_legend=1 00:25:44.491 --rc geninfo_all_blocks=1 00:25:44.491 --rc geninfo_unexecuted_blocks=1 00:25:44.491 00:25:44.491 ' 00:25:44.491 22:25:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:44.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.491 --rc genhtml_branch_coverage=1 00:25:44.491 --rc genhtml_function_coverage=1 00:25:44.491 --rc genhtml_legend=1 00:25:44.491 --rc geninfo_all_blocks=1 00:25:44.491 --rc geninfo_unexecuted_blocks=1 00:25:44.491 00:25:44.491 ' 00:25:44.491 22:25:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:44.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.491 --rc genhtml_branch_coverage=1 00:25:44.491 --rc genhtml_function_coverage=1 00:25:44.491 --rc genhtml_legend=1 00:25:44.491 --rc geninfo_all_blocks=1 00:25:44.491 --rc geninfo_unexecuted_blocks=1 00:25:44.491 00:25:44.491 ' 00:25:44.491 
22:25:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:44.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.491 --rc genhtml_branch_coverage=1 00:25:44.491 --rc genhtml_function_coverage=1 00:25:44.491 --rc genhtml_legend=1 00:25:44.491 --rc geninfo_all_blocks=1 00:25:44.491 --rc geninfo_unexecuted_blocks=1 00:25:44.491 00:25:44.491 ' 00:25:44.491 22:25:40 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:44.491 22:25:40 -- nvmf/common.sh@7 -- # uname -s 00:25:44.491 22:25:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.491 22:25:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.491 22:25:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.491 22:25:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.491 22:25:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.491 22:25:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.491 22:25:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.491 22:25:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.491 22:25:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.491 22:25:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.491 22:25:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:25:44.491 22:25:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:25:44.491 22:25:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.491 22:25:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.491 22:25:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:44.491 22:25:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:44.491 22:25:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.491 22:25:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.491 22:25:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.491 22:25:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.491 22:25:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.492 22:25:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.492 22:25:40 -- paths/export.sh@5 -- # export PATH 00:25:44.492 22:25:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.492 22:25:40 -- nvmf/common.sh@46 -- # : 0 00:25:44.492 22:25:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:44.492 22:25:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:44.492 22:25:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:44.492 22:25:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.492 22:25:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.492 22:25:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:44.492 22:25:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:44.492 22:25:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:44.492 22:25:40 -- target/dif.sh@15 -- # NULL_META=16 00:25:44.492 22:25:40 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:44.492 22:25:40 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:44.492 22:25:40 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:44.492 22:25:40 -- target/dif.sh@135 -- # nvmftestinit 00:25:44.492 22:25:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:44.492 22:25:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.492 22:25:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:44.492 22:25:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:44.492 22:25:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:44.492 22:25:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.492 22:25:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:44.492 22:25:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.492 22:25:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:44.492 22:25:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:44.492 22:25:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:44.492 22:25:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:44.492 22:25:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:44.492 22:25:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:44.492 22:25:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:44.492 22:25:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:44.492 22:25:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:44.492 22:25:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:44.492 22:25:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:44.492 22:25:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:44.492 22:25:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:44.492 22:25:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:44.492 22:25:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:44.492 22:25:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:44.492 22:25:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:44.492 22:25:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:44.492 22:25:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:44.492 22:25:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:44.492 Cannot find device "nvmf_tgt_br" 
00:25:44.492 22:25:40 -- nvmf/common.sh@154 -- # true 00:25:44.492 22:25:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:44.492 Cannot find device "nvmf_tgt_br2" 00:25:44.492 22:25:40 -- nvmf/common.sh@155 -- # true 00:25:44.492 22:25:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:44.492 22:25:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:44.492 Cannot find device "nvmf_tgt_br" 00:25:44.492 22:25:40 -- nvmf/common.sh@157 -- # true 00:25:44.492 22:25:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:44.492 Cannot find device "nvmf_tgt_br2" 00:25:44.492 22:25:40 -- nvmf/common.sh@158 -- # true 00:25:44.492 22:25:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:44.492 22:25:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:44.492 22:25:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:44.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:44.492 22:25:41 -- nvmf/common.sh@161 -- # true 00:25:44.492 22:25:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:44.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:44.492 22:25:41 -- nvmf/common.sh@162 -- # true 00:25:44.492 22:25:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:44.492 22:25:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:44.492 22:25:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:44.492 22:25:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:44.492 22:25:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:44.752 22:25:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:44.752 22:25:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:44.752 22:25:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:44.752 22:25:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:44.752 22:25:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:44.752 22:25:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:44.752 22:25:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:44.752 22:25:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:44.752 22:25:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:44.752 22:25:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:44.752 22:25:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:44.752 22:25:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:44.752 22:25:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:44.752 22:25:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:44.752 22:25:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:44.752 22:25:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:44.752 22:25:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:44.752 22:25:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:44.752 22:25:41 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:44.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:44.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:25:44.752 00:25:44.752 --- 10.0.0.2 ping statistics --- 00:25:44.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.752 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:44.752 22:25:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:44.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:44.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:25:44.752 00:25:44.752 --- 10.0.0.3 ping statistics --- 00:25:44.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.752 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:44.752 22:25:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:44.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:44.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:25:44.752 00:25:44.752 --- 10.0.0.1 ping statistics --- 00:25:44.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.752 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:25:44.752 22:25:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:44.752 22:25:41 -- nvmf/common.sh@421 -- # return 0 00:25:44.752 22:25:41 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:44.752 22:25:41 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:45.011 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:45.011 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:45.011 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:45.270 22:25:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.270 22:25:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:45.270 22:25:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:45.270 22:25:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.270 22:25:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:45.270 22:25:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:45.270 22:25:41 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:45.270 22:25:41 -- target/dif.sh@137 -- # nvmfappstart 00:25:45.270 22:25:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:45.270 22:25:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:45.270 22:25:41 -- common/autotest_common.sh@10 -- # set +x 00:25:45.270 22:25:41 -- nvmf/common.sh@469 -- # nvmfpid=91494 00:25:45.270 22:25:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:45.270 22:25:41 -- nvmf/common.sh@470 -- # waitforlisten 91494 00:25:45.270 22:25:41 -- common/autotest_common.sh@829 -- # '[' -z 91494 ']' 00:25:45.270 22:25:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.270 22:25:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:45.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.270 22:25:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
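The "Cannot find device" and "Cannot open network namespace" messages a little further up are expected: nvmftestinit first tears down whatever a previous run left behind, then nvmf_veth_init rebuilds the test network from scratch. Condensed from the commands traced above (link-up and cleanup steps omitted), the topology is one Linux bridge joining three veth pairs -- the initiator side stays in the root namespace, both target interfaces move into nvmf_tgt_ns_spdk where nvmf_tgt runs:

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target if #1 <-> bridge
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target if #2 <-> bridge
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first listener
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second listener

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2    # the three pings above verify exactly this path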
00:25:45.270 22:25:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:45.270 22:25:41 -- common/autotest_common.sh@10 -- # set +x 00:25:45.270 [2024-11-17 22:25:41.741912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:45.270 [2024-11-17 22:25:41.742020] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.270 [2024-11-17 22:25:41.883357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.529 [2024-11-17 22:25:41.993134] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:45.529 [2024-11-17 22:25:41.993296] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.529 [2024-11-17 22:25:41.993312] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.529 [2024-11-17 22:25:41.993323] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:45.529 [2024-11-17 22:25:41.993362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.466 22:25:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.466 22:25:42 -- common/autotest_common.sh@862 -- # return 0 00:25:46.466 22:25:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:46.466 22:25:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:46.466 22:25:42 -- common/autotest_common.sh@10 -- # set +x 00:25:46.466 22:25:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.466 22:25:42 -- target/dif.sh@139 -- # create_transport 00:25:46.466 22:25:42 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:46.466 22:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.466 22:25:42 -- common/autotest_common.sh@10 -- # set +x 00:25:46.466 [2024-11-17 22:25:42.821573] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.466 22:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.466 22:25:42 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:46.466 22:25:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:46.466 22:25:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:46.466 22:25:42 -- common/autotest_common.sh@10 -- # set +x 00:25:46.466 ************************************ 00:25:46.466 START TEST fio_dif_1_default 00:25:46.466 ************************************ 00:25:46.466 22:25:42 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:46.466 22:25:42 -- target/dif.sh@86 -- # create_subsystems 0 00:25:46.466 22:25:42 -- target/dif.sh@28 -- # local sub 00:25:46.466 22:25:42 -- target/dif.sh@30 -- # for sub in "$@" 00:25:46.466 22:25:42 -- target/dif.sh@31 -- # create_subsystem 0 00:25:46.466 22:25:42 -- target/dif.sh@18 -- # local sub_id=0 00:25:46.466 22:25:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:46.466 22:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.466 22:25:42 -- common/autotest_common.sh@10 -- # set +x 00:25:46.466 bdev_null0 00:25:46.466 22:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.466 22:25:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:46.466 22:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.466 22:25:42 -- common/autotest_common.sh@10 -- # set +x 00:25:46.466 22:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.466 22:25:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:46.466 22:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.466 22:25:42 -- common/autotest_common.sh@10 -- # set +x 00:25:46.466 22:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.466 22:25:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:46.466 22:25:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.466 22:25:42 -- common/autotest_common.sh@10 -- # set +x 00:25:46.466 [2024-11-17 22:25:42.869719] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.466 22:25:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.466 22:25:42 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:46.466 22:25:42 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:46.466 22:25:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.466 22:25:42 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.466 22:25:42 -- target/dif.sh@82 -- # gen_fio_conf 00:25:46.466 22:25:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:46.466 22:25:42 -- target/dif.sh@54 -- # local file 00:25:46.466 22:25:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:46.466 22:25:42 -- target/dif.sh@56 -- # cat 00:25:46.466 22:25:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:46.466 22:25:42 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.466 22:25:42 -- common/autotest_common.sh@1330 -- # shift 00:25:46.466 22:25:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:46.466 22:25:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.466 22:25:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:46.466 22:25:42 -- nvmf/common.sh@520 -- # config=() 00:25:46.466 22:25:42 -- nvmf/common.sh@520 -- # local subsystem config 00:25:46.466 22:25:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:46.466 22:25:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:46.466 { 00:25:46.466 "params": { 00:25:46.466 "name": "Nvme$subsystem", 00:25:46.466 "trtype": "$TEST_TRANSPORT", 00:25:46.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.466 "adrfam": "ipv4", 00:25:46.466 "trsvcid": "$NVMF_PORT", 00:25:46.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.466 "hdgst": ${hdgst:-false}, 00:25:46.466 "ddgst": ${ddgst:-false} 00:25:46.466 }, 00:25:46.466 "method": "bdev_nvme_attach_controller" 00:25:46.466 } 00:25:46.466 EOF 00:25:46.466 )") 00:25:46.466 22:25:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.466 22:25:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:46.466 22:25:42 -- target/dif.sh@72 -- # (( file <= files )) 00:25:46.466 22:25:42 -- common/autotest_common.sh@1334 -- # grep libasan 
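fio_dif_1_default never touches a local disk: fio is launched with the SPDK bdev ioengine (the LD_PRELOAD of build/fio/spdk_bdev is visible in the trace below), and gen_nvmf_target_json hands it a JSON config that attaches an NVMe-oF controller over the TCP listener just created. The harness feeds both files through /dev/fd process substitution; written out to ordinary files for illustration (the temp-file names, the outer "subsystems" wrapper, the [global] section and filename=Nvme0n1 are a reconstruction, not verbatim from the script), the run is roughly:

    cat > /tmp/nvme0_bdev.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF

    cat > /tmp/dif_default.fio <<'EOF'
    [global]
    ; fio thread mode, needed by the SPDK bdev plugin
    thread=1
    [filename0]
    ; the bdev created from the JSON config: controller Nvme0, namespace 1
    filename=Nvme0n1
    rw=randread
    bs=4096
    iodepth=4
    EOF

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0_bdev.json /tmp/dif_default.fio

The job parameters (randread, 4 KiB blocks, iodepth 4) are the ones echoed back in the fio banner below.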
00:25:46.466 22:25:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:46.466 22:25:42 -- nvmf/common.sh@542 -- # cat 00:25:46.466 22:25:42 -- nvmf/common.sh@544 -- # jq . 00:25:46.466 22:25:42 -- nvmf/common.sh@545 -- # IFS=, 00:25:46.466 22:25:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:46.466 "params": { 00:25:46.466 "name": "Nvme0", 00:25:46.466 "trtype": "tcp", 00:25:46.466 "traddr": "10.0.0.2", 00:25:46.466 "adrfam": "ipv4", 00:25:46.466 "trsvcid": "4420", 00:25:46.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:46.466 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:46.466 "hdgst": false, 00:25:46.466 "ddgst": false 00:25:46.466 }, 00:25:46.466 "method": "bdev_nvme_attach_controller" 00:25:46.466 }' 00:25:46.466 22:25:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:46.466 22:25:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:46.466 22:25:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.466 22:25:42 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:46.466 22:25:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.466 22:25:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:46.466 22:25:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:46.466 22:25:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:46.466 22:25:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:46.466 22:25:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.725 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:46.725 fio-3.35 00:25:46.725 Starting 1 thread 00:25:46.984 [2024-11-17 22:25:43.525810] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:46.984 [2024-11-17 22:25:43.525890] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:59.189 00:25:59.189 filename0: (groupid=0, jobs=1): err= 0: pid=91579: Sun Nov 17 22:25:53 2024 00:25:59.189 read: IOPS=2415, BW=9662KiB/s (9894kB/s)(94.5MiB/10012msec) 00:25:59.189 slat (nsec): min=5764, max=36768, avg=6586.85, stdev=1565.56 00:25:59.189 clat (usec): min=344, max=41601, avg=1636.53, stdev=7009.89 00:25:59.189 lat (usec): min=350, max=41609, avg=1643.11, stdev=7009.91 00:25:59.189 clat percentiles (usec): 00:25:59.189 | 1.00th=[ 351], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 363], 00:25:59.189 | 30.00th=[ 371], 40.00th=[ 371], 50.00th=[ 379], 60.00th=[ 383], 00:25:59.189 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 424], 95.00th=[ 461], 00:25:59.189 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:25:59.189 | 99.99th=[41681] 00:25:59.189 bw ( KiB/s): min= 5760, max=12960, per=100.00%, avg=9672.00, stdev=2017.77, samples=20 00:25:59.189 iops : min= 1440, max= 3240, avg=2418.00, stdev=504.44, samples=20 00:25:59.189 lat (usec) : 500=96.27%, 750=0.60% 00:25:59.189 lat (msec) : 2=0.02%, 10=0.02%, 50=3.09% 00:25:59.189 cpu : usr=90.88%, sys=8.01%, ctx=16, majf=0, minf=9 00:25:59.189 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:59.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.189 issued rwts: total=24184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.189 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:59.189 00:25:59.189 Run status group 0 (all jobs): 00:25:59.189 READ: bw=9662KiB/s (9894kB/s), 9662KiB/s-9662KiB/s (9894kB/s-9894kB/s), io=94.5MiB (99.1MB), run=10012-10012msec 00:25:59.189 22:25:53 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:59.189 22:25:53 -- target/dif.sh@43 -- # local sub 00:25:59.189 22:25:53 -- target/dif.sh@45 -- # for sub in "$@" 00:25:59.189 22:25:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:59.189 22:25:53 -- target/dif.sh@36 -- # local sub_id=0 00:25:59.189 22:25:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:59.189 22:25:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.190 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.190 22:25:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.190 22:25:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:59.190 22:25:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.190 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.190 22:25:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.190 00:25:59.190 real 0m11.060s 00:25:59.190 user 0m9.788s 00:25:59.190 sys 0m1.083s 00:25:59.190 22:25:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:59.190 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.190 ************************************ 00:25:59.190 END TEST fio_dif_1_default 00:25:59.190 ************************************ 00:25:59.190 22:25:53 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:59.190 22:25:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:59.190 22:25:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:59.190 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.190 ************************************ 00:25:59.190 START 
TEST fio_dif_1_multi_subsystems 00:25:59.190 ************************************ 00:25:59.190 22:25:53 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:25:59.190 22:25:53 -- target/dif.sh@92 -- # local files=1 00:25:59.190 22:25:53 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:59.190 22:25:53 -- target/dif.sh@28 -- # local sub 00:25:59.190 22:25:53 -- target/dif.sh@30 -- # for sub in "$@" 00:25:59.190 22:25:53 -- target/dif.sh@31 -- # create_subsystem 0 00:25:59.190 22:25:53 -- target/dif.sh@18 -- # local sub_id=0 00:25:59.190 22:25:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:59.190 22:25:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.190 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.190 bdev_null0 00:25:59.190 22:25:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.190 22:25:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:59.190 22:25:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.190 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.190 22:25:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.190 22:25:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:59.190 22:25:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.190 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.190 22:25:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.190 22:25:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:59.190 22:25:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.190 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.190 [2024-11-17 22:25:53.980732] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.190 22:25:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.190 22:25:53 -- target/dif.sh@30 -- # for sub in "$@" 00:25:59.190 22:25:53 -- target/dif.sh@31 -- # create_subsystem 1 00:25:59.190 22:25:53 -- target/dif.sh@18 -- # local sub_id=1 00:25:59.190 22:25:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:59.190 22:25:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.190 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.190 bdev_null1 00:25:59.190 22:25:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.190 22:25:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:59.190 22:25:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.190 22:25:53 -- common/autotest_common.sh@10 -- # set +x 00:25:59.190 22:25:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.190 22:25:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:59.190 22:25:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.190 22:25:54 -- common/autotest_common.sh@10 -- # set +x 00:25:59.190 22:25:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.190 22:25:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.190 22:25:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.190 22:25:54 -- 
common/autotest_common.sh@10 -- # set +x 00:25:59.190 22:25:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.190 22:25:54 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:59.190 22:25:54 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:59.190 22:25:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:59.190 22:25:54 -- nvmf/common.sh@520 -- # config=() 00:25:59.190 22:25:54 -- nvmf/common.sh@520 -- # local subsystem config 00:25:59.190 22:25:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.190 22:25:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.190 22:25:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.190 { 00:25:59.190 "params": { 00:25:59.190 "name": "Nvme$subsystem", 00:25:59.190 "trtype": "$TEST_TRANSPORT", 00:25:59.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.190 "adrfam": "ipv4", 00:25:59.190 "trsvcid": "$NVMF_PORT", 00:25:59.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.190 "hdgst": ${hdgst:-false}, 00:25:59.190 "ddgst": ${ddgst:-false} 00:25:59.190 }, 00:25:59.190 "method": "bdev_nvme_attach_controller" 00:25:59.190 } 00:25:59.190 EOF 00:25:59.190 )") 00:25:59.190 22:25:54 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.190 22:25:54 -- target/dif.sh@82 -- # gen_fio_conf 00:25:59.190 22:25:54 -- target/dif.sh@54 -- # local file 00:25:59.190 22:25:54 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:59.190 22:25:54 -- target/dif.sh@56 -- # cat 00:25:59.190 22:25:54 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:59.190 22:25:54 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:59.190 22:25:54 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:59.190 22:25:54 -- common/autotest_common.sh@1330 -- # shift 00:25:59.190 22:25:54 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:59.190 22:25:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.190 22:25:54 -- nvmf/common.sh@542 -- # cat 00:25:59.190 22:25:54 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:59.190 22:25:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:59.190 22:25:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:59.190 22:25:54 -- target/dif.sh@72 -- # (( file <= files )) 00:25:59.190 22:25:54 -- target/dif.sh@73 -- # cat 00:25:59.190 22:25:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:59.190 22:25:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.190 22:25:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.190 { 00:25:59.190 "params": { 00:25:59.190 "name": "Nvme$subsystem", 00:25:59.190 "trtype": "$TEST_TRANSPORT", 00:25:59.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.190 "adrfam": "ipv4", 00:25:59.190 "trsvcid": "$NVMF_PORT", 00:25:59.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.190 "hdgst": ${hdgst:-false}, 00:25:59.190 "ddgst": ${ddgst:-false} 00:25:59.190 }, 00:25:59.190 "method": "bdev_nvme_attach_controller" 00:25:59.190 } 00:25:59.190 EOF 00:25:59.190 )") 00:25:59.190 22:25:54 -- target/dif.sh@72 -- # (( file++ )) 00:25:59.190 22:25:54 -- 
target/dif.sh@72 -- # (( file <= files )) 00:25:59.190 22:25:54 -- nvmf/common.sh@542 -- # cat 00:25:59.190 22:25:54 -- nvmf/common.sh@544 -- # jq . 00:25:59.190 22:25:54 -- nvmf/common.sh@545 -- # IFS=, 00:25:59.190 22:25:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:59.190 "params": { 00:25:59.190 "name": "Nvme0", 00:25:59.190 "trtype": "tcp", 00:25:59.190 "traddr": "10.0.0.2", 00:25:59.190 "adrfam": "ipv4", 00:25:59.190 "trsvcid": "4420", 00:25:59.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:59.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:59.190 "hdgst": false, 00:25:59.190 "ddgst": false 00:25:59.190 }, 00:25:59.190 "method": "bdev_nvme_attach_controller" 00:25:59.190 },{ 00:25:59.190 "params": { 00:25:59.190 "name": "Nvme1", 00:25:59.190 "trtype": "tcp", 00:25:59.190 "traddr": "10.0.0.2", 00:25:59.190 "adrfam": "ipv4", 00:25:59.190 "trsvcid": "4420", 00:25:59.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:59.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:59.190 "hdgst": false, 00:25:59.190 "ddgst": false 00:25:59.190 }, 00:25:59.190 "method": "bdev_nvme_attach_controller" 00:25:59.190 }' 00:25:59.190 22:25:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:59.190 22:25:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:59.190 22:25:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.190 22:25:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:59.190 22:25:54 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:59.190 22:25:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:59.190 22:25:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:59.190 22:25:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:59.190 22:25:54 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:59.190 22:25:54 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.190 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:59.190 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:59.190 fio-3.35 00:25:59.190 Starting 2 threads 00:25:59.190 [2024-11-17 22:25:54.767969] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
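The job file the test hands to fio on /dev/fd/61 is generated by gen_fio_conf in dif.sh and is never echoed into this log; only its effect is visible in the filename0/filename1 headers just above (randread, 4 KiB blocks, iodepth 4, spdk_bdev engine) and in the ~10 s runtimes reported in the results that follow. A minimal reconstruction under those assumptions, where the two null bdevs are taken to surface as Nvme0n1 and Nvme1n1 (the usual SPDK name for namespace 1 of controllers attached as Nvme0/Nvme1):

# Sketch of the job file fed to fio via /dev/fd/61 (not shown in the trace).
# rw/bs/iodepth/ioengine come from the job headers above; the bdev names and
# the 10 s time-based runtime are assumptions inferred from the output.
cat > /tmp/dif_multi_subsystems.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF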
00:25:59.191 [2024-11-17 22:25:54.768046] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:09.171 00:26:09.171 filename0: (groupid=0, jobs=1): err= 0: pid=91741: Sun Nov 17 22:26:04 2024 00:26:09.171 read: IOPS=203, BW=813KiB/s (833kB/s)(8144KiB/10016msec) 00:26:09.171 slat (nsec): min=6126, max=45857, avg=8760.04, stdev=4463.04 00:26:09.171 clat (usec): min=356, max=41414, avg=19648.71, stdev=20201.98 00:26:09.171 lat (usec): min=363, max=41423, avg=19657.47, stdev=20201.89 00:26:09.171 clat percentiles (usec): 00:26:09.171 | 1.00th=[ 367], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 404], 00:26:09.171 | 30.00th=[ 416], 40.00th=[ 437], 50.00th=[ 490], 60.00th=[40633], 00:26:09.171 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:09.171 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:26:09.171 | 99.99th=[41157] 00:26:09.171 bw ( KiB/s): min= 512, max= 1248, per=48.24%, avg=812.80, stdev=192.39, samples=20 00:26:09.171 iops : min= 128, max= 312, avg=203.20, stdev=48.10, samples=20 00:26:09.171 lat (usec) : 500=50.93%, 750=1.28%, 1000=0.05% 00:26:09.171 lat (msec) : 2=0.20%, 50=47.54% 00:26:09.171 cpu : usr=97.44%, sys=2.13%, ctx=18, majf=0, minf=0 00:26:09.171 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:09.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.171 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:09.171 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:09.171 filename1: (groupid=0, jobs=1): err= 0: pid=91742: Sun Nov 17 22:26:04 2024 00:26:09.171 read: IOPS=217, BW=871KiB/s (892kB/s)(8736KiB/10028msec) 00:26:09.171 slat (nsec): min=5851, max=60422, avg=8994.76, stdev=4814.79 00:26:09.171 clat (usec): min=344, max=41393, avg=18336.75, stdev=20096.72 00:26:09.171 lat (usec): min=351, max=41402, avg=18345.74, stdev=20096.73 00:26:09.171 clat percentiles (usec): 00:26:09.171 | 1.00th=[ 355], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 392], 00:26:09.171 | 30.00th=[ 408], 40.00th=[ 424], 50.00th=[ 449], 60.00th=[40633], 00:26:09.171 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:09.171 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:26:09.171 | 99.99th=[41157] 00:26:09.171 bw ( KiB/s): min= 608, max= 1216, per=51.80%, avg=872.00, stdev=191.96, samples=20 00:26:09.171 iops : min= 152, max= 304, avg=218.00, stdev=47.99, samples=20 00:26:09.171 lat (usec) : 500=54.49%, 750=1.01% 00:26:09.171 lat (msec) : 2=0.18%, 50=44.32% 00:26:09.171 cpu : usr=97.59%, sys=2.00%, ctx=7, majf=0, minf=0 00:26:09.171 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:09.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:09.171 issued rwts: total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:09.171 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:09.171 00:26:09.171 Run status group 0 (all jobs): 00:26:09.171 READ: bw=1683KiB/s (1724kB/s), 813KiB/s-871KiB/s (833kB/s-892kB/s), io=16.5MiB (17.3MB), run=10016-10028msec 00:26:09.171 22:26:05 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:09.171 22:26:05 -- target/dif.sh@43 -- # local sub 00:26:09.171 22:26:05 -- target/dif.sh@45 -- # for sub in "$@" 00:26:09.171 
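The destroy_subsystems loop traced here issues two RPCs per subsystem id, mirroring the setup done earlier. A hand-run equivalent is sketched below; the RPC names and arguments are the ones in the trace, while the scripts/rpc.py path is an assumption (it presumes the command is run from an SPDK checkout against the default RPC socket).

# Manual teardown equivalent to the rpc_cmd calls traced around this point.
for i in 0 1; do
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    scripts/rpc.py bdev_null_delete "bdev_null${i}"
done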
22:26:05 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:09.171 22:26:05 -- target/dif.sh@36 -- # local sub_id=0 00:26:09.171 22:26:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:09.171 22:26:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.171 22:26:05 -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 22:26:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.171 22:26:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:09.171 22:26:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.171 22:26:05 -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 22:26:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.171 22:26:05 -- target/dif.sh@45 -- # for sub in "$@" 00:26:09.171 22:26:05 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:09.171 22:26:05 -- target/dif.sh@36 -- # local sub_id=1 00:26:09.171 22:26:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:09.171 22:26:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.171 22:26:05 -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 22:26:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.171 22:26:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:09.171 22:26:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.171 22:26:05 -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 22:26:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.171 00:26:09.171 real 0m11.222s 00:26:09.171 user 0m20.358s 00:26:09.171 sys 0m0.723s 00:26:09.171 22:26:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:09.171 22:26:05 -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 ************************************ 00:26:09.171 END TEST fio_dif_1_multi_subsystems 00:26:09.171 ************************************ 00:26:09.171 22:26:05 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:09.171 22:26:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:09.171 22:26:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:09.171 22:26:05 -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 ************************************ 00:26:09.171 START TEST fio_dif_rand_params 00:26:09.171 ************************************ 00:26:09.171 22:26:05 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:26:09.171 22:26:05 -- target/dif.sh@100 -- # local NULL_DIF 00:26:09.171 22:26:05 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:09.171 22:26:05 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:09.171 22:26:05 -- target/dif.sh@103 -- # bs=128k 00:26:09.171 22:26:05 -- target/dif.sh@103 -- # numjobs=3 00:26:09.171 22:26:05 -- target/dif.sh@103 -- # iodepth=3 00:26:09.171 22:26:05 -- target/dif.sh@103 -- # runtime=5 00:26:09.171 22:26:05 -- target/dif.sh@105 -- # create_subsystems 0 00:26:09.171 22:26:05 -- target/dif.sh@28 -- # local sub 00:26:09.171 22:26:05 -- target/dif.sh@30 -- # for sub in "$@" 00:26:09.171 22:26:05 -- target/dif.sh@31 -- # create_subsystem 0 00:26:09.171 22:26:05 -- target/dif.sh@18 -- # local sub_id=0 00:26:09.171 22:26:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:09.171 22:26:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.171 22:26:05 -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 bdev_null0 00:26:09.171 22:26:05 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.171 22:26:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:09.171 22:26:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.171 22:26:05 -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 22:26:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.171 22:26:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:09.171 22:26:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.171 22:26:05 -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 22:26:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.171 22:26:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:09.171 22:26:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.171 22:26:05 -- common/autotest_common.sh@10 -- # set +x 00:26:09.171 [2024-11-17 22:26:05.257226] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.171 22:26:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.171 22:26:05 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:09.171 22:26:05 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:09.171 22:26:05 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:09.171 22:26:05 -- nvmf/common.sh@520 -- # config=() 00:26:09.171 22:26:05 -- target/dif.sh@82 -- # gen_fio_conf 00:26:09.172 22:26:05 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.172 22:26:05 -- nvmf/common.sh@520 -- # local subsystem config 00:26:09.172 22:26:05 -- target/dif.sh@54 -- # local file 00:26:09.172 22:26:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:09.172 22:26:05 -- target/dif.sh@56 -- # cat 00:26:09.172 22:26:05 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.172 22:26:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:09.172 { 00:26:09.172 "params": { 00:26:09.172 "name": "Nvme$subsystem", 00:26:09.172 "trtype": "$TEST_TRANSPORT", 00:26:09.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:09.172 "adrfam": "ipv4", 00:26:09.172 "trsvcid": "$NVMF_PORT", 00:26:09.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:09.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:09.172 "hdgst": ${hdgst:-false}, 00:26:09.172 "ddgst": ${ddgst:-false} 00:26:09.172 }, 00:26:09.172 "method": "bdev_nvme_attach_controller" 00:26:09.172 } 00:26:09.172 EOF 00:26:09.172 )") 00:26:09.172 22:26:05 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:09.172 22:26:05 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:09.172 22:26:05 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:09.172 22:26:05 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:09.172 22:26:05 -- common/autotest_common.sh@1330 -- # shift 00:26:09.172 22:26:05 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:09.172 22:26:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:09.172 22:26:05 -- nvmf/common.sh@542 -- # cat 00:26:09.172 22:26:05 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:09.172 22:26:05 -- target/dif.sh@72 -- # (( file <= files )) 
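The backing device for this fio_dif_rand_params pass was created a few lines above with rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3. Issued directly through SPDK's RPC client it reads as below; the relative scripts/rpc.py path assumes an SPDK checkout.

# Same call issued directly against the running target.
# Positional arguments: bdev name, size in MiB, logical block size in bytes;
# --md-size 16 reserves 16 bytes of per-block metadata for protection
# information, --dif-type 3 selects DIF type 3.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3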
00:26:09.172 22:26:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:09.172 22:26:05 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:09.172 22:26:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:09.172 22:26:05 -- nvmf/common.sh@544 -- # jq . 00:26:09.172 22:26:05 -- nvmf/common.sh@545 -- # IFS=, 00:26:09.172 22:26:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:09.172 "params": { 00:26:09.172 "name": "Nvme0", 00:26:09.172 "trtype": "tcp", 00:26:09.172 "traddr": "10.0.0.2", 00:26:09.172 "adrfam": "ipv4", 00:26:09.172 "trsvcid": "4420", 00:26:09.172 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:09.172 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:09.172 "hdgst": false, 00:26:09.172 "ddgst": false 00:26:09.172 }, 00:26:09.172 "method": "bdev_nvme_attach_controller" 00:26:09.172 }' 00:26:09.172 22:26:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:09.172 22:26:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:09.172 22:26:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:09.172 22:26:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:09.172 22:26:05 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:09.172 22:26:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:09.172 22:26:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:09.172 22:26:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:09.172 22:26:05 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:09.172 22:26:05 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:09.172 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:09.172 ... 00:26:09.172 fio-3.35 00:26:09.172 Starting 3 threads 00:26:09.431 [2024-11-17 22:26:05.946285] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
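This pass runs with the parameters fio_dif_rand_params set above (bs=128k, numjobs=3, iodepth=3, runtime=5) against the single DIF type 3 subsystem, which matches the 128 KiB/iodepth-3 job header above and the ~5 s run times in the summary that follows. A reconstruction of the underlying job file, with the filename and time_based setting as assumptions:

# Sketch of the job file behind the three cloned jobs below; bs, iodepth,
# numjobs and runtime are the values set by fio_dif_rand_params, rw comes
# from the job header; filename=Nvme0n1 and time_based are assumptions.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF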
00:26:09.431 [2024-11-17 22:26:05.946358] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:14.703 00:26:14.703 filename0: (groupid=0, jobs=1): err= 0: pid=91898: Sun Nov 17 22:26:11 2024 00:26:14.703 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(170MiB/5006msec) 00:26:14.703 slat (nsec): min=5898, max=71422, avg=12289.74, stdev=6020.46 00:26:14.703 clat (usec): min=3112, max=51192, avg=11015.32, stdev=11144.10 00:26:14.703 lat (usec): min=3126, max=51222, avg=11027.61, stdev=11144.22 00:26:14.703 clat percentiles (usec): 00:26:14.703 | 1.00th=[ 3916], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6587], 00:26:14.703 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8455], 00:26:14.703 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[48497], 00:26:14.703 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:26:14.703 | 99.99th=[51119] 00:26:14.703 bw ( KiB/s): min=24576, max=44800, per=31.44%, avg=34816.00, stdev=7308.84, samples=10 00:26:14.703 iops : min= 192, max= 350, avg=272.00, stdev=57.10, samples=10 00:26:14.703 lat (msec) : 4=1.10%, 10=89.49%, 20=1.47%, 50=6.69%, 100=1.25% 00:26:14.703 cpu : usr=93.77%, sys=4.42%, ctx=9, majf=0, minf=0 00:26:14.703 IO depths : 1=4.5%, 2=95.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.703 issued rwts: total=1361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.703 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:14.703 filename0: (groupid=0, jobs=1): err= 0: pid=91899: Sun Nov 17 22:26:11 2024 00:26:14.703 read: IOPS=235, BW=29.4MiB/s (30.8MB/s)(147MiB/5002msec) 00:26:14.703 slat (nsec): min=6214, max=52431, avg=15237.51, stdev=7165.45 00:26:14.703 clat (usec): min=3353, max=52581, avg=12739.33, stdev=12039.08 00:26:14.703 lat (usec): min=3360, max=52602, avg=12754.57, stdev=12039.13 00:26:14.703 clat percentiles (usec): 00:26:14.703 | 1.00th=[ 3523], 5.00th=[ 5538], 10.00th=[ 6128], 20.00th=[ 6587], 00:26:14.703 | 30.00th=[ 6980], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[10421], 00:26:14.703 | 70.00th=[10945], 80.00th=[11338], 90.00th=[12387], 95.00th=[49546], 00:26:14.703 | 99.00th=[51643], 99.50th=[51643], 99.90th=[52167], 99.95th=[52691], 00:26:14.703 | 99.99th=[52691] 00:26:14.703 bw ( KiB/s): min=21760, max=39424, per=26.64%, avg=29496.89, stdev=5817.71, samples=9 00:26:14.703 iops : min= 170, max= 308, avg=230.44, stdev=45.45, samples=9 00:26:14.703 lat (msec) : 4=1.53%, 10=47.02%, 20=42.01%, 50=5.10%, 100=4.34% 00:26:14.703 cpu : usr=95.42%, sys=3.32%, ctx=26, majf=0, minf=0 00:26:14.703 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.703 issued rwts: total=1176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.703 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:14.703 filename0: (groupid=0, jobs=1): err= 0: pid=91900: Sun Nov 17 22:26:11 2024 00:26:14.703 read: IOPS=358, BW=44.8MiB/s (47.0MB/s)(224MiB/5003msec) 00:26:14.703 slat (nsec): min=5894, max=63711, avg=9960.20, stdev=5933.98 00:26:14.703 clat (usec): min=3387, max=47803, avg=8341.31, stdev=3761.87 00:26:14.703 lat (usec): min=3396, max=47809, avg=8351.27, stdev=3762.33 00:26:14.703 clat 
percentiles (usec): 00:26:14.703 | 1.00th=[ 3458], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 4047], 00:26:14.703 | 30.00th=[ 7111], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8717], 00:26:14.704 | 70.00th=[10945], 80.00th=[11600], 90.00th=[12125], 95.00th=[12387], 00:26:14.704 | 99.00th=[13173], 99.50th=[15664], 99.90th=[46400], 99.95th=[47973], 00:26:14.704 | 99.99th=[47973] 00:26:14.704 bw ( KiB/s): min=40704, max=52992, per=42.00%, avg=46506.67, stdev=3584.00, samples=9 00:26:14.704 iops : min= 318, max= 414, avg=363.33, stdev=28.00, samples=9 00:26:14.704 lat (msec) : 4=19.57%, 10=45.48%, 20=34.62%, 50=0.33% 00:26:14.704 cpu : usr=92.36%, sys=5.68%, ctx=9, majf=0, minf=0 00:26:14.704 IO depths : 1=30.3%, 2=69.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.704 issued rwts: total=1794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.704 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:14.704 00:26:14.704 Run status group 0 (all jobs): 00:26:14.704 READ: bw=108MiB/s (113MB/s), 29.4MiB/s-44.8MiB/s (30.8MB/s-47.0MB/s), io=541MiB (568MB), run=5002-5006msec 00:26:14.704 22:26:11 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:14.704 22:26:11 -- target/dif.sh@43 -- # local sub 00:26:14.704 22:26:11 -- target/dif.sh@45 -- # for sub in "$@" 00:26:14.704 22:26:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:14.704 22:26:11 -- target/dif.sh@36 -- # local sub_id=0 00:26:14.704 22:26:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:14.704 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.704 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.704 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.704 22:26:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:14.704 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.704 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.963 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.963 22:26:11 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:14.963 22:26:11 -- target/dif.sh@109 -- # bs=4k 00:26:14.963 22:26:11 -- target/dif.sh@109 -- # numjobs=8 00:26:14.963 22:26:11 -- target/dif.sh@109 -- # iodepth=16 00:26:14.963 22:26:11 -- target/dif.sh@109 -- # runtime= 00:26:14.963 22:26:11 -- target/dif.sh@109 -- # files=2 00:26:14.963 22:26:11 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:14.963 22:26:11 -- target/dif.sh@28 -- # local sub 00:26:14.963 22:26:11 -- target/dif.sh@30 -- # for sub in "$@" 00:26:14.963 22:26:11 -- target/dif.sh@31 -- # create_subsystem 0 00:26:14.963 22:26:11 -- target/dif.sh@18 -- # local sub_id=0 00:26:14.963 22:26:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:14.963 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.963 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.963 bdev_null0 00:26:14.963 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.963 22:26:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:14.963 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.963 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.963 22:26:11 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.963 22:26:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:14.963 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.963 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.963 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.963 22:26:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:14.963 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.963 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.963 [2024-11-17 22:26:11.353889] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.963 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.963 22:26:11 -- target/dif.sh@30 -- # for sub in "$@" 00:26:14.963 22:26:11 -- target/dif.sh@31 -- # create_subsystem 1 00:26:14.963 22:26:11 -- target/dif.sh@18 -- # local sub_id=1 00:26:14.963 22:26:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:14.963 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.963 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.963 bdev_null1 00:26:14.963 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.963 22:26:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:14.963 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.963 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.963 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.963 22:26:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:14.963 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.963 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.963 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.963 22:26:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.963 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.963 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.963 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.963 22:26:11 -- target/dif.sh@30 -- # for sub in "$@" 00:26:14.963 22:26:11 -- target/dif.sh@31 -- # create_subsystem 2 00:26:14.963 22:26:11 -- target/dif.sh@18 -- # local sub_id=2 00:26:14.963 22:26:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:14.963 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.963 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.963 bdev_null2 00:26:14.963 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.963 22:26:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:14.963 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.963 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.963 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.963 22:26:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:14.964 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 
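The 24-thread test below needs three identical targets (bdev_null0..2 exported through cnode0..2, each with a DIF type 2 null bdev). The per-subsystem rpc_cmd calls traced around this point, and repeated for subsystem 2 just below, condense to the loop sketched here; RPC names, NQNs, serial numbers, address and port are the ones in the trace, and the scripts/rpc.py path assumes an SPDK checkout.

# Condensed form of the traced per-subsystem setup.
for i in 0 1 2; do
    scripts/rpc.py bdev_null_create "bdev_null${i}" 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${i}" \
        --serial-number "53313233-${i}" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${i}" "bdev_null${i}"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${i}" \
        -t tcp -a 10.0.0.2 -s 4420
done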
00:26:14.964 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.964 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.964 22:26:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:14.964 22:26:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.964 22:26:11 -- common/autotest_common.sh@10 -- # set +x 00:26:14.964 22:26:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.964 22:26:11 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:14.964 22:26:11 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:14.964 22:26:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:14.964 22:26:11 -- nvmf/common.sh@520 -- # config=() 00:26:14.964 22:26:11 -- nvmf/common.sh@520 -- # local subsystem config 00:26:14.964 22:26:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:14.964 22:26:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:14.964 22:26:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:14.964 { 00:26:14.964 "params": { 00:26:14.964 "name": "Nvme$subsystem", 00:26:14.964 "trtype": "$TEST_TRANSPORT", 00:26:14.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.964 "adrfam": "ipv4", 00:26:14.964 "trsvcid": "$NVMF_PORT", 00:26:14.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.964 "hdgst": ${hdgst:-false}, 00:26:14.964 "ddgst": ${ddgst:-false} 00:26:14.964 }, 00:26:14.964 "method": "bdev_nvme_attach_controller" 00:26:14.964 } 00:26:14.964 EOF 00:26:14.964 )") 00:26:14.964 22:26:11 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:14.964 22:26:11 -- target/dif.sh@82 -- # gen_fio_conf 00:26:14.964 22:26:11 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:14.964 22:26:11 -- target/dif.sh@54 -- # local file 00:26:14.964 22:26:11 -- target/dif.sh@56 -- # cat 00:26:14.964 22:26:11 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:14.964 22:26:11 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:14.964 22:26:11 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:14.964 22:26:11 -- common/autotest_common.sh@1330 -- # shift 00:26:14.964 22:26:11 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:14.964 22:26:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:14.964 22:26:11 -- nvmf/common.sh@542 -- # cat 00:26:14.964 22:26:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:14.964 22:26:11 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:14.964 22:26:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:14.964 22:26:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:14.964 22:26:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:14.964 { 00:26:14.964 "params": { 00:26:14.964 "name": "Nvme$subsystem", 00:26:14.964 "trtype": "$TEST_TRANSPORT", 00:26:14.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.964 "adrfam": "ipv4", 00:26:14.964 "trsvcid": "$NVMF_PORT", 00:26:14.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.964 "hdgst": ${hdgst:-false}, 00:26:14.964 "ddgst": ${ddgst:-false} 
00:26:14.964 }, 00:26:14.964 "method": "bdev_nvme_attach_controller" 00:26:14.964 } 00:26:14.964 EOF 00:26:14.964 )") 00:26:14.964 22:26:11 -- nvmf/common.sh@542 -- # cat 00:26:14.964 22:26:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:14.964 22:26:11 -- target/dif.sh@72 -- # (( file <= files )) 00:26:14.964 22:26:11 -- target/dif.sh@73 -- # cat 00:26:14.964 22:26:11 -- target/dif.sh@72 -- # (( file++ )) 00:26:14.964 22:26:11 -- target/dif.sh@72 -- # (( file <= files )) 00:26:14.964 22:26:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:14.964 22:26:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:14.964 { 00:26:14.964 "params": { 00:26:14.964 "name": "Nvme$subsystem", 00:26:14.964 "trtype": "$TEST_TRANSPORT", 00:26:14.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.964 "adrfam": "ipv4", 00:26:14.964 "trsvcid": "$NVMF_PORT", 00:26:14.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.964 "hdgst": ${hdgst:-false}, 00:26:14.964 "ddgst": ${ddgst:-false} 00:26:14.964 }, 00:26:14.964 "method": "bdev_nvme_attach_controller" 00:26:14.964 } 00:26:14.964 EOF 00:26:14.964 )") 00:26:14.964 22:26:11 -- target/dif.sh@73 -- # cat 00:26:14.964 22:26:11 -- nvmf/common.sh@542 -- # cat 00:26:14.964 22:26:11 -- target/dif.sh@72 -- # (( file++ )) 00:26:14.964 22:26:11 -- target/dif.sh@72 -- # (( file <= files )) 00:26:14.964 22:26:11 -- nvmf/common.sh@544 -- # jq . 00:26:14.964 22:26:11 -- nvmf/common.sh@545 -- # IFS=, 00:26:14.964 22:26:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:14.964 "params": { 00:26:14.964 "name": "Nvme0", 00:26:14.964 "trtype": "tcp", 00:26:14.964 "traddr": "10.0.0.2", 00:26:14.964 "adrfam": "ipv4", 00:26:14.964 "trsvcid": "4420", 00:26:14.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:14.964 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:14.964 "hdgst": false, 00:26:14.964 "ddgst": false 00:26:14.964 }, 00:26:14.964 "method": "bdev_nvme_attach_controller" 00:26:14.964 },{ 00:26:14.964 "params": { 00:26:14.964 "name": "Nvme1", 00:26:14.964 "trtype": "tcp", 00:26:14.964 "traddr": "10.0.0.2", 00:26:14.964 "adrfam": "ipv4", 00:26:14.964 "trsvcid": "4420", 00:26:14.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:14.964 "hdgst": false, 00:26:14.964 "ddgst": false 00:26:14.964 }, 00:26:14.964 "method": "bdev_nvme_attach_controller" 00:26:14.964 },{ 00:26:14.964 "params": { 00:26:14.964 "name": "Nvme2", 00:26:14.964 "trtype": "tcp", 00:26:14.964 "traddr": "10.0.0.2", 00:26:14.964 "adrfam": "ipv4", 00:26:14.964 "trsvcid": "4420", 00:26:14.964 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:14.964 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:14.964 "hdgst": false, 00:26:14.964 "ddgst": false 00:26:14.964 }, 00:26:14.964 "method": "bdev_nvme_attach_controller" 00:26:14.964 }' 00:26:14.964 22:26:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:14.964 22:26:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:14.964 22:26:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:14.964 22:26:11 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:14.964 22:26:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:14.964 22:26:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:14.964 22:26:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:14.964 22:26:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 
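With the three-controller JSON assembled, the test launches stock fio with the SPDK bdev plugin preloaded; the invocation that follows can be reproduced stand-alone roughly as sketched below. The plugin and fio paths are the ones from this workspace, while the two file names are stand-ins for the /dev/fd/62 (bdev JSON config) and /dev/fd/61 (job file) descriptors the test uses.

# Stand-alone form of the fio launch that follows: preload the SPDK fio
# plugin, select the spdk_bdev ioengine, and point it at a bdev JSON config
# describing the NVMe-oF controllers printed above.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf ./nvme_tcp_bdevs.json ./dif_rand_params.fio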
00:26:14.964 22:26:11 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:14.964 22:26:11 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.223 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:15.223 ... 00:26:15.223 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:15.223 ... 00:26:15.223 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:15.223 ... 00:26:15.223 fio-3.35 00:26:15.223 Starting 24 threads 00:26:15.829 [2024-11-17 22:26:12.299633] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:15.829 [2024-11-17 22:26:12.299783] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:28.051 00:26:28.051 filename0: (groupid=0, jobs=1): err= 0: pid=91996: Sun Nov 17 22:26:22 2024 00:26:28.051 read: IOPS=271, BW=1087KiB/s (1113kB/s)(10.7MiB/10039msec) 00:26:28.051 slat (nsec): min=3839, max=52277, avg=11588.48, stdev=6988.53 00:26:28.051 clat (msec): min=6, max=131, avg=58.79, stdev=20.56 00:26:28.051 lat (msec): min=6, max=131, avg=58.80, stdev=20.56 00:26:28.051 clat percentiles (msec): 00:26:28.051 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 39], 00:26:28.051 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 61], 00:26:28.051 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 86], 95.00th=[ 96], 00:26:28.051 | 99.00th=[ 109], 99.50th=[ 115], 99.90th=[ 128], 99.95th=[ 132], 00:26:28.051 | 99.99th=[ 132] 00:26:28.051 bw ( KiB/s): min= 736, max= 1536, per=4.35%, avg=1084.40, stdev=215.47, samples=20 00:26:28.051 iops : min= 184, max= 384, avg=271.10, stdev=53.87, samples=20 00:26:28.051 lat (msec) : 10=1.17%, 50=37.84%, 100=57.65%, 250=3.34% 00:26:28.051 cpu : usr=34.76%, sys=0.59%, ctx=1286, majf=0, minf=9 00:26:28.051 IO depths : 1=0.8%, 2=1.9%, 4=9.3%, 8=75.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:28.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.051 complete : 0=0.0%, 4=89.7%, 8=5.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.051 issued rwts: total=2727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.051 filename0: (groupid=0, jobs=1): err= 0: pid=91997: Sun Nov 17 22:26:22 2024 00:26:28.051 read: IOPS=257, BW=1030KiB/s (1054kB/s)(10.1MiB/10030msec) 00:26:28.051 slat (usec): min=3, max=8040, avg=22.65, stdev=261.88 00:26:28.051 clat (msec): min=27, max=143, avg=61.87, stdev=17.48 00:26:28.051 lat (msec): min=27, max=143, avg=61.89, stdev=17.48 00:26:28.051 clat percentiles (msec): 00:26:28.051 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 48], 00:26:28.051 | 30.00th=[ 54], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 63], 00:26:28.051 | 70.00th=[ 68], 80.00th=[ 75], 90.00th=[ 86], 95.00th=[ 96], 00:26:28.051 | 99.00th=[ 113], 99.50th=[ 120], 99.90th=[ 144], 99.95th=[ 144], 00:26:28.051 | 99.99th=[ 144] 00:26:28.051 bw ( KiB/s): min= 816, max= 1495, per=4.13%, avg=1029.65, stdev=163.54, samples=20 00:26:28.051 iops : min= 204, max= 373, avg=257.35, stdev=40.78, samples=20 00:26:28.051 lat (msec) : 50=24.63%, 100=72.35%, 250=3.02% 00:26:28.051 cpu : usr=41.80%, sys=0.62%, ctx=1295, majf=0, minf=9 00:26:28.051 IO depths : 
1=1.7%, 2=3.8%, 4=12.1%, 8=71.0%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:28.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.051 complete : 0=0.0%, 4=90.6%, 8=4.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.051 issued rwts: total=2582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.051 filename0: (groupid=0, jobs=1): err= 0: pid=91998: Sun Nov 17 22:26:22 2024 00:26:28.051 read: IOPS=240, BW=961KiB/s (984kB/s)(9624KiB/10017msec) 00:26:28.051 slat (usec): min=3, max=5031, avg=20.51, stdev=190.21 00:26:28.051 clat (msec): min=23, max=125, avg=66.41, stdev=18.82 00:26:28.051 lat (msec): min=23, max=125, avg=66.43, stdev=18.82 00:26:28.051 clat percentiles (msec): 00:26:28.051 | 1.00th=[ 32], 5.00th=[ 38], 10.00th=[ 43], 20.00th=[ 53], 00:26:28.051 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 72], 00:26:28.051 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 92], 95.00th=[ 101], 00:26:28.051 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 127], 99.95th=[ 127], 00:26:28.051 | 99.99th=[ 127] 00:26:28.051 bw ( KiB/s): min= 640, max= 1328, per=3.82%, avg=952.42, stdev=173.04, samples=19 00:26:28.051 iops : min= 160, max= 332, avg=238.11, stdev=43.26, samples=19 00:26:28.051 lat (msec) : 50=16.92%, 100=78.22%, 250=4.86% 00:26:28.051 cpu : usr=44.42%, sys=0.59%, ctx=1319, majf=0, minf=9 00:26:28.051 IO depths : 1=3.6%, 2=7.7%, 4=18.5%, 8=60.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:26:28.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.051 complete : 0=0.0%, 4=92.3%, 8=2.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.051 issued rwts: total=2406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.051 filename0: (groupid=0, jobs=1): err= 0: pid=91999: Sun Nov 17 22:26:22 2024 00:26:28.051 read: IOPS=234, BW=939KiB/s (961kB/s)(9400KiB/10012msec) 00:26:28.051 slat (usec): min=3, max=8041, avg=22.85, stdev=286.60 00:26:28.051 clat (msec): min=33, max=143, avg=67.99, stdev=21.36 00:26:28.051 lat (msec): min=33, max=143, avg=68.01, stdev=21.36 00:26:28.051 clat percentiles (msec): 00:26:28.051 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 51], 00:26:28.051 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 71], 00:26:28.051 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:26:28.051 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:26:28.051 | 99.99th=[ 144] 00:26:28.051 bw ( KiB/s): min= 640, max= 1280, per=3.74%, avg=933.11, stdev=155.88, samples=19 00:26:28.051 iops : min= 160, max= 320, avg=233.26, stdev=38.97, samples=19 00:26:28.051 lat (msec) : 50=19.96%, 100=70.94%, 250=9.11% 00:26:28.051 cpu : usr=34.04%, sys=0.57%, ctx=941, majf=0, minf=9 00:26:28.051 IO depths : 1=1.2%, 2=3.0%, 4=11.7%, 8=71.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:28.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.051 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.051 issued rwts: total=2350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.051 filename0: (groupid=0, jobs=1): err= 0: pid=92000: Sun Nov 17 22:26:22 2024 00:26:28.051 read: IOPS=247, BW=989KiB/s (1012kB/s)(9904KiB/10017msec) 00:26:28.051 slat (usec): min=4, max=4004, avg=15.57, stdev=110.52 00:26:28.051 clat (msec): min=18, max=140, avg=64.59, stdev=17.11 00:26:28.051 lat (msec): 
min=18, max=140, avg=64.61, stdev=17.11 00:26:28.051 clat percentiles (msec): 00:26:28.051 | 1.00th=[ 33], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 50], 00:26:28.051 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 69], 00:26:28.052 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 93], 00:26:28.052 | 99.00th=[ 118], 99.50th=[ 127], 99.90th=[ 140], 99.95th=[ 140], 00:26:28.052 | 99.99th=[ 140] 00:26:28.052 bw ( KiB/s): min= 768, max= 1200, per=3.91%, avg=975.63, stdev=153.25, samples=19 00:26:28.052 iops : min= 192, max= 300, avg=243.89, stdev=38.32, samples=19 00:26:28.052 lat (msec) : 20=0.40%, 50=20.64%, 100=76.86%, 250=2.10% 00:26:28.052 cpu : usr=40.13%, sys=0.57%, ctx=1149, majf=0, minf=9 00:26:28.052 IO depths : 1=1.8%, 2=4.2%, 4=12.7%, 8=69.9%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:28.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 issued rwts: total=2476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.052 filename0: (groupid=0, jobs=1): err= 0: pid=92001: Sun Nov 17 22:26:22 2024 00:26:28.052 read: IOPS=235, BW=941KiB/s (963kB/s)(9416KiB/10009msec) 00:26:28.052 slat (usec): min=4, max=8033, avg=30.97, stdev=378.23 00:26:28.052 clat (msec): min=25, max=144, avg=67.84, stdev=20.59 00:26:28.052 lat (msec): min=25, max=144, avg=67.87, stdev=20.59 00:26:28.052 clat percentiles (msec): 00:26:28.052 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 49], 00:26:28.052 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 72], 00:26:28.052 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:26:28.052 | 99.00th=[ 128], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:26:28.052 | 99.99th=[ 144] 00:26:28.052 bw ( KiB/s): min= 688, max= 1152, per=3.76%, avg=937.26, stdev=135.77, samples=19 00:26:28.052 iops : min= 172, max= 288, avg=234.32, stdev=33.94, samples=19 00:26:28.052 lat (msec) : 50=23.28%, 100=69.03%, 250=7.69% 00:26:28.052 cpu : usr=32.75%, sys=0.42%, ctx=876, majf=0, minf=9 00:26:28.052 IO depths : 1=1.6%, 2=3.7%, 4=12.0%, 8=70.9%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:28.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 issued rwts: total=2354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.052 filename0: (groupid=0, jobs=1): err= 0: pid=92002: Sun Nov 17 22:26:22 2024 00:26:28.052 read: IOPS=239, BW=959KiB/s (982kB/s)(9596KiB/10011msec) 00:26:28.052 slat (usec): min=4, max=8027, avg=24.55, stdev=294.69 00:26:28.052 clat (msec): min=27, max=143, avg=66.64, stdev=18.67 00:26:28.052 lat (msec): min=27, max=143, avg=66.67, stdev=18.67 00:26:28.052 clat percentiles (msec): 00:26:28.052 | 1.00th=[ 34], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 49], 00:26:28.052 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:26:28.052 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 93], 95.00th=[ 100], 00:26:28.052 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:26:28.052 | 99.99th=[ 144] 00:26:28.052 bw ( KiB/s): min= 640, max= 1152, per=3.81%, avg=949.37, stdev=132.28, samples=19 00:26:28.052 iops : min= 160, max= 288, avg=237.32, stdev=33.09, samples=19 00:26:28.052 lat (msec) : 50=22.34%, 100=72.70%, 250=4.96% 00:26:28.052 cpu : usr=35.76%, sys=0.46%, 
ctx=982, majf=0, minf=9 00:26:28.052 IO depths : 1=1.9%, 2=4.1%, 4=12.5%, 8=70.1%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:28.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 issued rwts: total=2399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.052 filename0: (groupid=0, jobs=1): err= 0: pid=92003: Sun Nov 17 22:26:22 2024 00:26:28.052 read: IOPS=240, BW=963KiB/s (986kB/s)(9632KiB/10002msec) 00:26:28.052 slat (usec): min=3, max=4036, avg=16.87, stdev=141.77 00:26:28.052 clat (msec): min=10, max=134, avg=66.34, stdev=19.22 00:26:28.052 lat (msec): min=10, max=134, avg=66.35, stdev=19.22 00:26:28.052 clat percentiles (msec): 00:26:28.052 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 52], 00:26:28.052 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 70], 00:26:28.052 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 90], 95.00th=[ 106], 00:26:28.052 | 99.00th=[ 122], 99.50th=[ 134], 99.90th=[ 136], 99.95th=[ 136], 00:26:28.052 | 99.99th=[ 136] 00:26:28.052 bw ( KiB/s): min= 624, max= 1200, per=3.79%, avg=946.32, stdev=155.36, samples=19 00:26:28.052 iops : min= 156, max= 300, avg=236.58, stdev=38.84, samples=19 00:26:28.052 lat (msec) : 20=0.66%, 50=15.82%, 100=77.41%, 250=6.10% 00:26:28.052 cpu : usr=48.77%, sys=0.81%, ctx=1233, majf=0, minf=9 00:26:28.052 IO depths : 1=2.9%, 2=6.4%, 4=16.2%, 8=63.9%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:28.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 complete : 0=0.0%, 4=92.0%, 8=3.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 issued rwts: total=2408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.052 filename1: (groupid=0, jobs=1): err= 0: pid=92004: Sun Nov 17 22:26:22 2024 00:26:28.052 read: IOPS=267, BW=1070KiB/s (1095kB/s)(10.5MiB/10025msec) 00:26:28.052 slat (usec): min=3, max=4030, avg=15.14, stdev=109.91 00:26:28.052 clat (msec): min=21, max=122, avg=59.73, stdev=19.45 00:26:28.052 lat (msec): min=21, max=122, avg=59.75, stdev=19.45 00:26:28.052 clat percentiles (msec): 00:26:28.052 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 42], 00:26:28.052 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 62], 00:26:28.052 | 70.00th=[ 68], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 97], 00:26:28.052 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 124], 99.95th=[ 124], 00:26:28.052 | 99.99th=[ 124] 00:26:28.052 bw ( KiB/s): min= 680, max= 1456, per=4.27%, avg=1065.90, stdev=222.27, samples=20 00:26:28.052 iops : min= 170, max= 364, avg=266.45, stdev=55.56, samples=20 00:26:28.052 lat (msec) : 50=34.54%, 100=62.14%, 250=3.32% 00:26:28.052 cpu : usr=40.62%, sys=0.52%, ctx=1281, majf=0, minf=9 00:26:28.052 IO depths : 1=0.5%, 2=1.0%, 4=7.8%, 8=76.8%, 16=13.8%, 32=0.0%, >=64=0.0% 00:26:28.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 complete : 0=0.0%, 4=89.7%, 8=6.3%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 issued rwts: total=2681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.052 filename1: (groupid=0, jobs=1): err= 0: pid=92005: Sun Nov 17 22:26:22 2024 00:26:28.052 read: IOPS=280, BW=1121KiB/s (1148kB/s)(11.0MiB/10035msec) 00:26:28.052 slat (usec): min=4, max=8030, avg=20.30, stdev=251.23 00:26:28.052 clat (usec): 
min=727, max=144607, avg=56857.39, stdev=22310.33 00:26:28.052 lat (usec): min=737, max=144620, avg=56877.68, stdev=22313.62 00:26:28.052 clat percentiles (usec): 00:26:28.052 | 1.00th=[ 1500], 5.00th=[ 19006], 10.00th=[ 34866], 20.00th=[ 40633], 00:26:28.052 | 30.00th=[ 46924], 40.00th=[ 50594], 50.00th=[ 55837], 60.00th=[ 59507], 00:26:28.052 | 70.00th=[ 64750], 80.00th=[ 71828], 90.00th=[ 84411], 95.00th=[ 94897], 00:26:28.052 | 99.00th=[121111], 99.50th=[122160], 99.90th=[143655], 99.95th=[143655], 00:26:28.052 | 99.99th=[143655] 00:26:28.052 bw ( KiB/s): min= 768, max= 2416, per=4.48%, avg=1118.80, stdev=361.04, samples=20 00:26:28.052 iops : min= 192, max= 604, avg=279.70, stdev=90.26, samples=20 00:26:28.052 lat (usec) : 750=0.07% 00:26:28.052 lat (msec) : 2=2.03%, 4=1.56%, 10=0.89%, 20=0.57%, 50=34.80% 00:26:28.052 lat (msec) : 100=56.52%, 250=3.55% 00:26:28.052 cpu : usr=35.06%, sys=0.80%, ctx=1297, majf=0, minf=10 00:26:28.052 IO depths : 1=1.6%, 2=4.0%, 4=12.7%, 8=70.1%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:28.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 issued rwts: total=2813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.052 filename1: (groupid=0, jobs=1): err= 0: pid=92006: Sun Nov 17 22:26:22 2024 00:26:28.052 read: IOPS=254, BW=1018KiB/s (1043kB/s)(9.96MiB/10019msec) 00:26:28.052 slat (usec): min=4, max=8057, avg=15.72, stdev=159.49 00:26:28.052 clat (msec): min=22, max=134, avg=62.69, stdev=18.80 00:26:28.052 lat (msec): min=22, max=134, avg=62.70, stdev=18.81 00:26:28.052 clat percentiles (msec): 00:26:28.052 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:26:28.052 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 66], 00:26:28.052 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 85], 95.00th=[ 96], 00:26:28.052 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 136], 00:26:28.052 | 99.99th=[ 136] 00:26:28.052 bw ( KiB/s): min= 712, max= 1280, per=4.06%, avg=1013.47, stdev=151.79, samples=19 00:26:28.052 iops : min= 178, max= 320, avg=253.37, stdev=37.95, samples=19 00:26:28.052 lat (msec) : 50=30.26%, 100=65.15%, 250=4.59% 00:26:28.052 cpu : usr=34.13%, sys=0.51%, ctx=944, majf=0, minf=9 00:26:28.052 IO depths : 1=1.1%, 2=2.5%, 4=9.1%, 8=74.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:26:28.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.052 issued rwts: total=2551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.052 filename1: (groupid=0, jobs=1): err= 0: pid=92007: Sun Nov 17 22:26:22 2024 00:26:28.052 read: IOPS=252, BW=1010KiB/s (1034kB/s)(9.88MiB/10023msec) 00:26:28.052 slat (usec): min=4, max=8046, avg=21.53, stdev=252.26 00:26:28.052 clat (msec): min=24, max=144, avg=63.20, stdev=19.84 00:26:28.052 lat (msec): min=24, max=144, avg=63.23, stdev=19.84 00:26:28.052 clat percentiles (msec): 00:26:28.052 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 46], 00:26:28.052 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 65], 00:26:28.052 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 89], 95.00th=[ 96], 00:26:28.052 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 144], 99.95th=[ 144], 00:26:28.052 | 99.99th=[ 144] 00:26:28.052 bw ( KiB/s): min= 728, max= 1504, 
per=4.04%, avg=1007.45, stdev=191.54, samples=20 00:26:28.052 iops : min= 182, max= 376, avg=251.85, stdev=47.88, samples=20 00:26:28.052 lat (msec) : 50=27.63%, 100=68.18%, 250=4.19% 00:26:28.052 cpu : usr=39.28%, sys=0.39%, ctx=1079, majf=0, minf=9 00:26:28.053 IO depths : 1=1.3%, 2=3.1%, 4=10.3%, 8=72.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:28.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 complete : 0=0.0%, 4=90.3%, 8=5.5%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 issued rwts: total=2530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.053 filename1: (groupid=0, jobs=1): err= 0: pid=92008: Sun Nov 17 22:26:22 2024 00:26:28.053 read: IOPS=302, BW=1208KiB/s (1237kB/s)(11.8MiB/10033msec) 00:26:28.053 slat (usec): min=4, max=8018, avg=14.18, stdev=145.62 00:26:28.053 clat (msec): min=3, max=119, avg=52.80, stdev=17.71 00:26:28.053 lat (msec): min=3, max=119, avg=52.82, stdev=17.71 00:26:28.053 clat percentiles (msec): 00:26:28.053 | 1.00th=[ 10], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 38], 00:26:28.053 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 50], 60.00th=[ 56], 00:26:28.053 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 74], 95.00th=[ 87], 00:26:28.053 | 99.00th=[ 108], 99.50th=[ 116], 99.90th=[ 121], 99.95th=[ 121], 00:26:28.053 | 99.99th=[ 121] 00:26:28.053 bw ( KiB/s): min= 864, max= 1795, per=4.83%, avg=1205.75, stdev=209.78, samples=20 00:26:28.053 iops : min= 216, max= 448, avg=301.40, stdev=52.34, samples=20 00:26:28.053 lat (msec) : 4=0.53%, 10=0.53%, 20=0.53%, 50=49.80%, 100=47.43% 00:26:28.053 lat (msec) : 250=1.19% 00:26:28.053 cpu : usr=35.89%, sys=0.71%, ctx=1035, majf=0, minf=9 00:26:28.053 IO depths : 1=0.1%, 2=0.3%, 4=4.5%, 8=80.7%, 16=14.4%, 32=0.0%, >=64=0.0% 00:26:28.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 complete : 0=0.0%, 4=88.9%, 8=7.6%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 issued rwts: total=3030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.053 filename1: (groupid=0, jobs=1): err= 0: pid=92009: Sun Nov 17 22:26:22 2024 00:26:28.053 read: IOPS=253, BW=1016KiB/s (1040kB/s)(9.94MiB/10020msec) 00:26:28.053 slat (usec): min=4, max=8028, avg=25.03, stdev=275.15 00:26:28.053 clat (msec): min=22, max=117, avg=62.85, stdev=17.37 00:26:28.053 lat (msec): min=22, max=117, avg=62.88, stdev=17.36 00:26:28.053 clat percentiles (msec): 00:26:28.053 | 1.00th=[ 30], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 48], 00:26:28.053 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 65], 00:26:28.053 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 88], 95.00th=[ 95], 00:26:28.053 | 99.00th=[ 104], 99.50th=[ 112], 99.90th=[ 117], 99.95th=[ 118], 00:26:28.053 | 99.99th=[ 118] 00:26:28.053 bw ( KiB/s): min= 768, max= 1424, per=4.05%, avg=1010.58, stdev=170.18, samples=19 00:26:28.053 iops : min= 192, max= 356, avg=252.63, stdev=42.55, samples=19 00:26:28.053 lat (msec) : 50=23.98%, 100=73.82%, 250=2.20% 00:26:28.053 cpu : usr=41.72%, sys=0.66%, ctx=1193, majf=0, minf=9 00:26:28.053 IO depths : 1=1.6%, 2=3.6%, 4=12.1%, 8=70.8%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:28.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 complete : 0=0.0%, 4=90.5%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 issued rwts: total=2544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.053 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:26:28.053 filename1: (groupid=0, jobs=1): err= 0: pid=92010: Sun Nov 17 22:26:22 2024 00:26:28.053 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.2MiB/10027msec) 00:26:28.053 slat (usec): min=3, max=5022, avg=19.35, stdev=180.19 00:26:28.053 clat (msec): min=27, max=122, avg=61.24, stdev=17.64 00:26:28.053 lat (msec): min=27, max=122, avg=61.26, stdev=17.63 00:26:28.053 clat percentiles (msec): 00:26:28.053 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 47], 00:26:28.053 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 62], 00:26:28.053 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 87], 95.00th=[ 94], 00:26:28.053 | 99.00th=[ 109], 99.50th=[ 116], 99.90th=[ 123], 99.95th=[ 123], 00:26:28.053 | 99.99th=[ 123] 00:26:28.053 bw ( KiB/s): min= 768, max= 1328, per=4.16%, avg=1038.85, stdev=161.50, samples=20 00:26:28.053 iops : min= 192, max= 332, avg=259.70, stdev=40.37, samples=20 00:26:28.053 lat (msec) : 50=28.58%, 100=69.13%, 250=2.30% 00:26:28.053 cpu : usr=39.18%, sys=0.64%, ctx=1138, majf=0, minf=9 00:26:28.053 IO depths : 1=1.4%, 2=3.0%, 4=10.6%, 8=72.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:28.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 issued rwts: total=2614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.053 filename1: (groupid=0, jobs=1): err= 0: pid=92011: Sun Nov 17 22:26:22 2024 00:26:28.053 read: IOPS=290, BW=1162KiB/s (1190kB/s)(11.4MiB/10045msec) 00:26:28.053 slat (usec): min=3, max=8022, avg=14.59, stdev=148.48 00:26:28.053 clat (msec): min=13, max=117, avg=54.93, stdev=18.23 00:26:28.053 lat (msec): min=13, max=117, avg=54.95, stdev=18.23 00:26:28.053 clat percentiles (msec): 00:26:28.053 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 39], 00:26:28.053 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 58], 00:26:28.053 | 70.00th=[ 61], 80.00th=[ 71], 90.00th=[ 82], 95.00th=[ 86], 00:26:28.053 | 99.00th=[ 109], 99.50th=[ 117], 99.90th=[ 118], 99.95th=[ 118], 00:26:28.053 | 99.99th=[ 118] 00:26:28.053 bw ( KiB/s): min= 768, max= 1504, per=4.65%, avg=1160.80, stdev=224.60, samples=20 00:26:28.053 iops : min= 192, max= 376, avg=290.20, stdev=56.15, samples=20 00:26:28.053 lat (msec) : 20=0.55%, 50=46.74%, 100=50.34%, 250=2.36% 00:26:28.053 cpu : usr=39.07%, sys=0.59%, ctx=1036, majf=0, minf=9 00:26:28.053 IO depths : 1=0.5%, 2=1.5%, 4=8.0%, 8=76.9%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:28.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 issued rwts: total=2918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.053 filename2: (groupid=0, jobs=1): err= 0: pid=92012: Sun Nov 17 22:26:22 2024 00:26:28.053 read: IOPS=276, BW=1105KiB/s (1131kB/s)(10.8MiB/10016msec) 00:26:28.053 slat (usec): min=4, max=8023, avg=14.37, stdev=152.50 00:26:28.053 clat (msec): min=9, max=141, avg=57.80, stdev=18.49 00:26:28.053 lat (msec): min=9, max=141, avg=57.82, stdev=18.49 00:26:28.053 clat percentiles (msec): 00:26:28.053 | 1.00th=[ 11], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 44], 00:26:28.053 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 61], 00:26:28.053 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 92], 00:26:28.053 | 99.00th=[ 113], 
99.50th=[ 120], 99.90th=[ 142], 99.95th=[ 142], 00:26:28.053 | 99.99th=[ 142] 00:26:28.053 bw ( KiB/s): min= 776, max= 1496, per=4.42%, avg=1102.40, stdev=180.81, samples=20 00:26:28.053 iops : min= 194, max= 374, avg=275.60, stdev=45.20, samples=20 00:26:28.053 lat (msec) : 10=0.51%, 20=0.58%, 50=38.79%, 100=57.66%, 250=2.46% 00:26:28.053 cpu : usr=34.48%, sys=0.51%, ctx=965, majf=0, minf=9 00:26:28.053 IO depths : 1=0.8%, 2=2.0%, 4=9.6%, 8=74.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:26:28.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 issued rwts: total=2766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.053 filename2: (groupid=0, jobs=1): err= 0: pid=92013: Sun Nov 17 22:26:22 2024 00:26:28.053 read: IOPS=253, BW=1012KiB/s (1037kB/s)(9.90MiB/10016msec) 00:26:28.053 slat (usec): min=3, max=8018, avg=18.28, stdev=199.11 00:26:28.053 clat (msec): min=26, max=132, avg=63.04, stdev=16.61 00:26:28.053 lat (msec): min=26, max=132, avg=63.06, stdev=16.62 00:26:28.053 clat percentiles (msec): 00:26:28.053 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 48], 00:26:28.053 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 67], 00:26:28.053 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 85], 95.00th=[ 92], 00:26:28.053 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 133], 99.95th=[ 133], 00:26:28.053 | 99.99th=[ 133] 00:26:28.053 bw ( KiB/s): min= 768, max= 1248, per=4.06%, avg=1013.30, stdev=144.74, samples=20 00:26:28.053 iops : min= 192, max= 312, avg=253.30, stdev=36.18, samples=20 00:26:28.053 lat (msec) : 50=26.55%, 100=71.52%, 250=1.93% 00:26:28.053 cpu : usr=32.71%, sys=0.48%, ctx=1024, majf=0, minf=9 00:26:28.053 IO depths : 1=1.3%, 2=3.4%, 4=12.5%, 8=70.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:28.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 issued rwts: total=2535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.053 filename2: (groupid=0, jobs=1): err= 0: pid=92014: Sun Nov 17 22:26:22 2024 00:26:28.053 read: IOPS=306, BW=1227KiB/s (1256kB/s)(12.0MiB/10041msec) 00:26:28.053 slat (usec): min=4, max=4027, avg=13.44, stdev=100.57 00:26:28.053 clat (msec): min=18, max=120, avg=52.05, stdev=15.94 00:26:28.053 lat (msec): min=18, max=120, avg=52.06, stdev=15.94 00:26:28.053 clat percentiles (msec): 00:26:28.053 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 39], 00:26:28.053 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 50], 60.00th=[ 56], 00:26:28.053 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 74], 95.00th=[ 83], 00:26:28.053 | 99.00th=[ 96], 99.50th=[ 102], 99.90th=[ 114], 99.95th=[ 114], 00:26:28.053 | 99.99th=[ 122] 00:26:28.053 bw ( KiB/s): min= 896, max= 1600, per=4.91%, avg=1225.30, stdev=208.97, samples=20 00:26:28.053 iops : min= 224, max= 400, avg=306.30, stdev=52.27, samples=20 00:26:28.053 lat (msec) : 20=0.19%, 50=51.74%, 100=47.32%, 250=0.75% 00:26:28.053 cpu : usr=44.25%, sys=0.54%, ctx=1271, majf=0, minf=9 00:26:28.053 IO depths : 1=0.5%, 2=1.1%, 4=6.8%, 8=78.2%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:28.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 complete : 0=0.0%, 4=89.3%, 8=6.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.053 issued rwts: 
total=3079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.054 filename2: (groupid=0, jobs=1): err= 0: pid=92015: Sun Nov 17 22:26:22 2024 00:26:28.054 read: IOPS=236, BW=946KiB/s (968kB/s)(9460KiB/10004msec) 00:26:28.054 slat (usec): min=4, max=8032, avg=15.93, stdev=165.10 00:26:28.054 clat (msec): min=25, max=142, avg=67.57, stdev=17.59 00:26:28.054 lat (msec): min=25, max=142, avg=67.58, stdev=17.59 00:26:28.054 clat percentiles (msec): 00:26:28.054 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:26:28.054 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 71], 00:26:28.054 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 100], 00:26:28.054 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 142], 99.95th=[ 142], 00:26:28.054 | 99.99th=[ 142] 00:26:28.054 bw ( KiB/s): min= 728, max= 1072, per=3.77%, avg=939.79, stdev=85.81, samples=19 00:26:28.054 iops : min= 182, max= 268, avg=234.95, stdev=21.45, samples=19 00:26:28.054 lat (msec) : 50=20.47%, 100=74.76%, 250=4.78% 00:26:28.054 cpu : usr=32.62%, sys=0.52%, ctx=869, majf=0, minf=10 00:26:28.054 IO depths : 1=2.8%, 2=5.8%, 4=15.3%, 8=65.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:26:28.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.054 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.054 issued rwts: total=2365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.054 filename2: (groupid=0, jobs=1): err= 0: pid=92016: Sun Nov 17 22:26:22 2024 00:26:28.054 read: IOPS=271, BW=1088KiB/s (1114kB/s)(10.6MiB/10025msec) 00:26:28.054 slat (usec): min=3, max=6530, avg=16.68, stdev=153.87 00:26:28.054 clat (msec): min=21, max=135, avg=58.74, stdev=21.26 00:26:28.054 lat (msec): min=21, max=135, avg=58.75, stdev=21.26 00:26:28.054 clat percentiles (msec): 00:26:28.054 | 1.00th=[ 30], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 40], 00:26:28.054 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 59], 00:26:28.054 | 70.00th=[ 67], 80.00th=[ 75], 90.00th=[ 89], 95.00th=[ 103], 00:26:28.054 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 136], 00:26:28.054 | 99.99th=[ 136] 00:26:28.054 bw ( KiB/s): min= 640, max= 1552, per=4.34%, avg=1083.75, stdev=251.79, samples=20 00:26:28.054 iops : min= 160, max= 388, avg=270.90, stdev=62.95, samples=20 00:26:28.054 lat (msec) : 50=42.85%, 100=51.76%, 250=5.39% 00:26:28.054 cpu : usr=39.89%, sys=0.61%, ctx=1263, majf=0, minf=9 00:26:28.054 IO depths : 1=1.2%, 2=2.7%, 4=9.7%, 8=74.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:28.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.054 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.054 issued rwts: total=2726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.054 filename2: (groupid=0, jobs=1): err= 0: pid=92017: Sun Nov 17 22:26:22 2024 00:26:28.054 read: IOPS=260, BW=1040KiB/s (1065kB/s)(10.2MiB/10041msec) 00:26:28.054 slat (usec): min=4, max=11021, avg=23.06, stdev=309.28 00:26:28.054 clat (msec): min=26, max=137, avg=61.39, stdev=19.19 00:26:28.054 lat (msec): min=26, max=137, avg=61.41, stdev=19.18 00:26:28.054 clat percentiles (msec): 00:26:28.054 | 1.00th=[ 33], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 47], 00:26:28.054 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 61], 00:26:28.054 | 70.00th=[ 72], 
80.00th=[ 75], 90.00th=[ 86], 95.00th=[ 96], 00:26:28.054 | 99.00th=[ 110], 99.50th=[ 121], 99.90th=[ 138], 99.95th=[ 138], 00:26:28.054 | 99.99th=[ 138] 00:26:28.054 bw ( KiB/s): min= 696, max= 1328, per=4.16%, avg=1038.65, stdev=179.75, samples=20 00:26:28.054 iops : min= 174, max= 332, avg=259.65, stdev=44.94, samples=20 00:26:28.054 lat (msec) : 50=32.75%, 100=63.27%, 250=3.98% 00:26:28.054 cpu : usr=32.88%, sys=0.34%, ctx=869, majf=0, minf=9 00:26:28.054 IO depths : 1=0.8%, 2=1.7%, 4=9.1%, 8=75.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:28.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.054 complete : 0=0.0%, 4=90.1%, 8=5.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.054 issued rwts: total=2611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.054 filename2: (groupid=0, jobs=1): err= 0: pid=92018: Sun Nov 17 22:26:22 2024 00:26:28.054 read: IOPS=256, BW=1027KiB/s (1051kB/s)(10.1MiB/10027msec) 00:26:28.054 slat (usec): min=3, max=8017, avg=18.58, stdev=193.61 00:26:28.054 clat (msec): min=25, max=125, avg=62.20, stdev=17.61 00:26:28.054 lat (msec): min=25, max=125, avg=62.22, stdev=17.61 00:26:28.054 clat percentiles (msec): 00:26:28.054 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:26:28.054 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65], 00:26:28.054 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 86], 95.00th=[ 92], 00:26:28.054 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 126], 00:26:28.054 | 99.99th=[ 126] 00:26:28.054 bw ( KiB/s): min= 768, max= 1336, per=4.10%, avg=1022.85, stdev=158.95, samples=20 00:26:28.054 iops : min= 192, max= 334, avg=255.70, stdev=39.74, samples=20 00:26:28.054 lat (msec) : 50=25.33%, 100=72.11%, 250=2.56% 00:26:28.054 cpu : usr=45.67%, sys=0.68%, ctx=1224, majf=0, minf=9 00:26:28.054 IO depths : 1=2.1%, 2=4.9%, 4=13.5%, 8=68.5%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:28.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.054 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.054 issued rwts: total=2574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.054 filename2: (groupid=0, jobs=1): err= 0: pid=92019: Sun Nov 17 22:26:22 2024 00:26:28.054 read: IOPS=257, BW=1031KiB/s (1055kB/s)(10.1MiB/10040msec) 00:26:28.054 slat (usec): min=4, max=8023, avg=20.62, stdev=208.15 00:26:28.054 clat (msec): min=22, max=154, avg=61.96, stdev=21.03 00:26:28.054 lat (msec): min=22, max=154, avg=61.98, stdev=21.04 00:26:28.054 clat percentiles (msec): 00:26:28.054 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 45], 00:26:28.054 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 62], 00:26:28.054 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 89], 95.00th=[ 103], 00:26:28.054 | 99.00th=[ 123], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 155], 00:26:28.054 | 99.99th=[ 155] 00:26:28.054 bw ( KiB/s): min= 640, max= 1352, per=4.12%, avg=1028.50, stdev=191.71, samples=20 00:26:28.054 iops : min= 160, max= 338, avg=257.10, stdev=47.93, samples=20 00:26:28.054 lat (msec) : 50=33.01%, 100=61.85%, 250=5.14% 00:26:28.054 cpu : usr=40.11%, sys=0.62%, ctx=1116, majf=0, minf=9 00:26:28.054 IO depths : 1=1.6%, 2=3.7%, 4=13.1%, 8=70.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:28.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.054 complete : 0=0.0%, 4=91.0%, 8=3.9%, 16=5.1%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:26:28.054 issued rwts: total=2587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:28.054 00:26:28.054 Run status group 0 (all jobs): 00:26:28.054 READ: bw=24.4MiB/s (25.5MB/s), 939KiB/s-1227KiB/s (961kB/s-1256kB/s), io=245MiB (257MB), run=10002-10045msec 00:26:28.054 22:26:22 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:28.054 22:26:22 -- target/dif.sh@43 -- # local sub 00:26:28.054 22:26:22 -- target/dif.sh@45 -- # for sub in "$@" 00:26:28.054 22:26:22 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:28.054 22:26:22 -- target/dif.sh@36 -- # local sub_id=0 00:26:28.054 22:26:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:28.054 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.054 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.054 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.054 22:26:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:28.054 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.054 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.054 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.054 22:26:22 -- target/dif.sh@45 -- # for sub in "$@" 00:26:28.054 22:26:22 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:28.054 22:26:22 -- target/dif.sh@36 -- # local sub_id=1 00:26:28.054 22:26:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:28.054 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.054 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.054 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.054 22:26:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:28.054 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.054 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.054 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.054 22:26:22 -- target/dif.sh@45 -- # for sub in "$@" 00:26:28.054 22:26:22 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:28.054 22:26:22 -- target/dif.sh@36 -- # local sub_id=2 00:26:28.054 22:26:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:28.054 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.054 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.054 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.054 22:26:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:28.055 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.055 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.055 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.055 22:26:22 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:28.055 22:26:22 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:28.055 22:26:22 -- target/dif.sh@115 -- # numjobs=2 00:26:28.055 22:26:22 -- target/dif.sh@115 -- # iodepth=8 00:26:28.055 22:26:22 -- target/dif.sh@115 -- # runtime=5 00:26:28.055 22:26:22 -- target/dif.sh@115 -- # files=1 00:26:28.055 22:26:22 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:28.055 22:26:22 -- target/dif.sh@28 -- # local sub 00:26:28.055 22:26:22 -- target/dif.sh@30 -- # for sub in "$@" 00:26:28.055 22:26:22 -- target/dif.sh@31 -- # create_subsystem 0 00:26:28.055 22:26:22 -- target/dif.sh@18 -- # local 
sub_id=0 00:26:28.055 22:26:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:28.055 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.055 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.055 bdev_null0 00:26:28.055 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.055 22:26:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:28.055 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.055 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.055 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.055 22:26:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:28.055 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.055 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.055 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.055 22:26:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:28.055 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.055 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.055 [2024-11-17 22:26:22.925092] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.055 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.055 22:26:22 -- target/dif.sh@30 -- # for sub in "$@" 00:26:28.055 22:26:22 -- target/dif.sh@31 -- # create_subsystem 1 00:26:28.055 22:26:22 -- target/dif.sh@18 -- # local sub_id=1 00:26:28.055 22:26:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:28.055 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.055 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.055 bdev_null1 00:26:28.055 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.055 22:26:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:28.055 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.055 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.055 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.055 22:26:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:28.055 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.055 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.055 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.055 22:26:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:28.055 22:26:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.055 22:26:22 -- common/autotest_common.sh@10 -- # set +x 00:26:28.055 22:26:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.055 22:26:22 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:28.055 22:26:22 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:28.055 22:26:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:28.055 22:26:22 -- nvmf/common.sh@520 -- # config=() 00:26:28.055 22:26:22 -- nvmf/common.sh@520 -- # local subsystem config 00:26:28.055 22:26:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:26:28.055 22:26:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:28.055 { 00:26:28.055 "params": { 00:26:28.055 "name": "Nvme$subsystem", 00:26:28.055 "trtype": "$TEST_TRANSPORT", 00:26:28.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.055 "adrfam": "ipv4", 00:26:28.055 "trsvcid": "$NVMF_PORT", 00:26:28.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.055 "hdgst": ${hdgst:-false}, 00:26:28.055 "ddgst": ${ddgst:-false} 00:26:28.055 }, 00:26:28.055 "method": "bdev_nvme_attach_controller" 00:26:28.055 } 00:26:28.055 EOF 00:26:28.055 )") 00:26:28.055 22:26:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:28.055 22:26:22 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:28.055 22:26:22 -- target/dif.sh@82 -- # gen_fio_conf 00:26:28.055 22:26:22 -- target/dif.sh@54 -- # local file 00:26:28.055 22:26:22 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:28.055 22:26:22 -- target/dif.sh@56 -- # cat 00:26:28.055 22:26:22 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:28.055 22:26:22 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:28.055 22:26:22 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:28.055 22:26:22 -- common/autotest_common.sh@1330 -- # shift 00:26:28.055 22:26:22 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:28.055 22:26:22 -- nvmf/common.sh@542 -- # cat 00:26:28.055 22:26:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.055 22:26:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:28.055 22:26:22 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:28.055 22:26:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:28.055 22:26:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:28.055 22:26:22 -- target/dif.sh@72 -- # (( file <= files )) 00:26:28.055 22:26:22 -- target/dif.sh@73 -- # cat 00:26:28.055 22:26:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:28.055 22:26:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:28.055 { 00:26:28.055 "params": { 00:26:28.055 "name": "Nvme$subsystem", 00:26:28.055 "trtype": "$TEST_TRANSPORT", 00:26:28.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.055 "adrfam": "ipv4", 00:26:28.055 "trsvcid": "$NVMF_PORT", 00:26:28.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.055 "hdgst": ${hdgst:-false}, 00:26:28.055 "ddgst": ${ddgst:-false} 00:26:28.055 }, 00:26:28.055 "method": "bdev_nvme_attach_controller" 00:26:28.055 } 00:26:28.055 EOF 00:26:28.055 )") 00:26:28.055 22:26:22 -- target/dif.sh@72 -- # (( file++ )) 00:26:28.055 22:26:22 -- target/dif.sh@72 -- # (( file <= files )) 00:26:28.055 22:26:22 -- nvmf/common.sh@542 -- # cat 00:26:28.055 22:26:22 -- nvmf/common.sh@544 -- # jq . 
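Editor's note: the trace above assembles one bdev_nvme_attach_controller entry per subsystem and merges them with jq; fio then receives the JSON config on /dev/fd/62 and the generated job file on /dev/fd/61, with the SPDK bdev ioengine preloaded. A minimal sketch of the same invocation done by hand, assuming the plugin sits at the usual build/fio/spdk_bdev path and that the config and job are written to ordinary files (file names here are illustrative, and the envelope around the per-controller entries is the standard SPDK "subsystems"/"bdev" wrapper, which the printf below does not show):

    # Sketch only -- not the harness's exact invocation.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif.fio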
00:26:28.055 22:26:22 -- nvmf/common.sh@545 -- # IFS=, 00:26:28.055 22:26:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:28.055 "params": { 00:26:28.055 "name": "Nvme0", 00:26:28.055 "trtype": "tcp", 00:26:28.055 "traddr": "10.0.0.2", 00:26:28.055 "adrfam": "ipv4", 00:26:28.055 "trsvcid": "4420", 00:26:28.055 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:28.055 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:28.055 "hdgst": false, 00:26:28.055 "ddgst": false 00:26:28.055 }, 00:26:28.055 "method": "bdev_nvme_attach_controller" 00:26:28.055 },{ 00:26:28.055 "params": { 00:26:28.055 "name": "Nvme1", 00:26:28.055 "trtype": "tcp", 00:26:28.055 "traddr": "10.0.0.2", 00:26:28.055 "adrfam": "ipv4", 00:26:28.055 "trsvcid": "4420", 00:26:28.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:28.055 "hdgst": false, 00:26:28.055 "ddgst": false 00:26:28.055 }, 00:26:28.055 "method": "bdev_nvme_attach_controller" 00:26:28.055 }' 00:26:28.055 22:26:22 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:28.055 22:26:22 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:28.055 22:26:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.055 22:26:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:28.055 22:26:22 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:28.055 22:26:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:28.055 22:26:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:28.055 22:26:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:28.055 22:26:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:28.055 22:26:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:28.055 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:28.055 ... 00:26:28.055 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:28.055 ... 00:26:28.055 fio-3.35 00:26:28.055 Starting 4 threads 00:26:28.055 [2024-11-17 22:26:23.661697] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
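Editor's note: the two job descriptions above (filename0/filename1, rw=randread, iodepth=8) come from the job file the harness generates on /dev/fd/61 using the bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5 settings selected earlier in the trace. A rough illustration of such a job file follows; the section names match the output above, but the bdev names and remaining options are assumptions rather than values copied from this run:

    ; illustrative job file -- the harness writes its own equivalent on /dev/fd/61
    [global]
    ioengine=spdk_bdev
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5

    ; bdev names below are assumed (controllers Nvme0/Nvme1, namespace 1)
    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1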
00:26:28.055 [2024-11-17 22:26:23.661779] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:32.248 00:26:32.248 filename0: (groupid=0, jobs=1): err= 0: pid=92156: Sun Nov 17 22:26:28 2024 00:26:32.248 read: IOPS=2405, BW=18.8MiB/s (19.7MB/s)(94.0MiB/5001msec) 00:26:32.248 slat (nsec): min=6184, max=86193, avg=15483.41, stdev=6788.27 00:26:32.248 clat (usec): min=1581, max=6306, avg=3247.91, stdev=158.75 00:26:32.248 lat (usec): min=1591, max=6332, avg=3263.39, stdev=159.25 00:26:32.248 clat percentiles (usec): 00:26:32.248 | 1.00th=[ 3032], 5.00th=[ 3130], 10.00th=[ 3130], 20.00th=[ 3163], 00:26:32.248 | 30.00th=[ 3195], 40.00th=[ 3228], 50.00th=[ 3228], 60.00th=[ 3261], 00:26:32.248 | 70.00th=[ 3261], 80.00th=[ 3294], 90.00th=[ 3359], 95.00th=[ 3458], 00:26:32.248 | 99.00th=[ 3851], 99.50th=[ 3982], 99.90th=[ 4293], 99.95th=[ 6259], 00:26:32.248 | 99.99th=[ 6325] 00:26:32.248 bw ( KiB/s): min=18981, max=19456, per=24.99%, avg=19256.67, stdev=151.83, samples=9 00:26:32.248 iops : min= 2372, max= 2432, avg=2407.00, stdev=19.13, samples=9 00:26:32.248 lat (msec) : 2=0.01%, 4=99.56%, 10=0.43% 00:26:32.248 cpu : usr=95.26%, sys=3.44%, ctx=8, majf=0, minf=10 00:26:32.248 IO depths : 1=11.4%, 2=25.0%, 4=50.0%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.248 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.248 issued rwts: total=12032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.248 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:32.248 filename0: (groupid=0, jobs=1): err= 0: pid=92157: Sun Nov 17 22:26:28 2024 00:26:32.248 read: IOPS=2408, BW=18.8MiB/s (19.7MB/s)(94.1MiB/5002msec) 00:26:32.248 slat (nsec): min=5812, max=85517, avg=9966.86, stdev=7444.32 00:26:32.248 clat (usec): min=1923, max=4178, avg=3266.76, stdev=118.55 00:26:32.248 lat (usec): min=1938, max=4199, avg=3276.73, stdev=118.95 00:26:32.248 clat percentiles (usec): 00:26:32.248 | 1.00th=[ 2999], 5.00th=[ 3130], 10.00th=[ 3163], 20.00th=[ 3195], 00:26:32.248 | 30.00th=[ 3228], 40.00th=[ 3228], 50.00th=[ 3261], 60.00th=[ 3261], 00:26:32.248 | 70.00th=[ 3294], 80.00th=[ 3326], 90.00th=[ 3392], 95.00th=[ 3458], 00:26:32.248 | 99.00th=[ 3654], 99.50th=[ 3785], 99.90th=[ 4080], 99.95th=[ 4146], 00:26:32.248 | 99.99th=[ 4178] 00:26:32.248 bw ( KiB/s): min=19161, max=19456, per=25.03%, avg=19281.00, stdev=115.28, samples=9 00:26:32.248 iops : min= 2395, max= 2432, avg=2410.11, stdev=14.43, samples=9 00:26:32.248 lat (msec) : 2=0.06%, 4=99.68%, 10=0.27% 00:26:32.248 cpu : usr=95.38%, sys=3.42%, ctx=24, majf=0, minf=0 00:26:32.248 IO depths : 1=11.4%, 2=25.0%, 4=50.0%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.248 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.248 issued rwts: total=12048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.248 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:32.248 filename1: (groupid=0, jobs=1): err= 0: pid=92158: Sun Nov 17 22:26:28 2024 00:26:32.248 read: IOPS=2408, BW=18.8MiB/s (19.7MB/s)(94.1MiB/5001msec) 00:26:32.248 slat (nsec): min=6078, max=84767, avg=13009.05, stdev=6012.14 00:26:32.248 clat (usec): min=454, max=5786, avg=3281.56, stdev=265.47 00:26:32.248 lat (usec): min=460, max=5797, avg=3294.57, stdev=265.45 00:26:32.248 clat percentiles (usec): 00:26:32.248 | 1.00th=[ 2671], 
5.00th=[ 2802], 10.00th=[ 3130], 20.00th=[ 3195], 00:26:32.248 | 30.00th=[ 3228], 40.00th=[ 3261], 50.00th=[ 3261], 60.00th=[ 3294], 00:26:32.248 | 70.00th=[ 3294], 80.00th=[ 3359], 90.00th=[ 3556], 95.00th=[ 3752], 00:26:32.248 | 99.00th=[ 4047], 99.50th=[ 4228], 99.90th=[ 5080], 99.95th=[ 5145], 00:26:32.248 | 99.99th=[ 5800] 00:26:32.248 bw ( KiB/s): min=18992, max=19504, per=25.01%, avg=19266.44, stdev=164.26, samples=9 00:26:32.248 iops : min= 2374, max= 2438, avg=2408.22, stdev=20.58, samples=9 00:26:32.248 lat (usec) : 500=0.02%, 1000=0.07% 00:26:32.248 lat (msec) : 2=0.04%, 4=98.64%, 10=1.22% 00:26:32.248 cpu : usr=94.86%, sys=3.86%, ctx=97, majf=0, minf=9 00:26:32.248 IO depths : 1=0.3%, 2=0.9%, 4=74.1%, 8=24.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.248 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.248 issued rwts: total=12043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.248 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:32.248 filename1: (groupid=0, jobs=1): err= 0: pid=92159: Sun Nov 17 22:26:28 2024 00:26:32.248 read: IOPS=2408, BW=18.8MiB/s (19.7MB/s)(94.1MiB/5002msec) 00:26:32.248 slat (nsec): min=6313, max=87446, avg=15930.47, stdev=7062.94 00:26:32.248 clat (usec): min=1861, max=4445, avg=3243.28, stdev=139.46 00:26:32.248 lat (usec): min=1880, max=4455, avg=3259.21, stdev=139.86 00:26:32.248 clat percentiles (usec): 00:26:32.248 | 1.00th=[ 3032], 5.00th=[ 3097], 10.00th=[ 3130], 20.00th=[ 3163], 00:26:32.248 | 30.00th=[ 3195], 40.00th=[ 3228], 50.00th=[ 3228], 60.00th=[ 3261], 00:26:32.248 | 70.00th=[ 3261], 80.00th=[ 3294], 90.00th=[ 3359], 95.00th=[ 3458], 00:26:32.248 | 99.00th=[ 3720], 99.50th=[ 3982], 99.90th=[ 4146], 99.95th=[ 4228], 00:26:32.248 | 99.99th=[ 4293] 00:26:32.248 bw ( KiB/s): min=19161, max=19456, per=25.03%, avg=19281.00, stdev=115.28, samples=9 00:26:32.248 iops : min= 2395, max= 2432, avg=2410.11, stdev=14.43, samples=9 00:26:32.248 lat (msec) : 2=0.06%, 4=99.56%, 10=0.38% 00:26:32.248 cpu : usr=94.48%, sys=4.18%, ctx=8, majf=0, minf=9 00:26:32.248 IO depths : 1=11.8%, 2=25.0%, 4=50.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.248 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.248 issued rwts: total=12048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.248 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:32.248 00:26:32.248 Run status group 0 (all jobs): 00:26:32.248 READ: bw=75.2MiB/s (78.9MB/s), 18.8MiB/s-18.8MiB/s (19.7MB/s-19.7MB/s), io=376MiB (395MB), run=5001-5002msec 00:26:32.507 22:26:29 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:32.507 22:26:29 -- target/dif.sh@43 -- # local sub 00:26:32.507 22:26:29 -- target/dif.sh@45 -- # for sub in "$@" 00:26:32.507 22:26:29 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:32.507 22:26:29 -- target/dif.sh@36 -- # local sub_id=0 00:26:32.507 22:26:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:32.507 22:26:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.507 22:26:29 -- common/autotest_common.sh@10 -- # set +x 00:26:32.507 22:26:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.507 22:26:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:32.507 22:26:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.507 22:26:29 -- 
common/autotest_common.sh@10 -- # set +x 00:26:32.507 22:26:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.507 22:26:29 -- target/dif.sh@45 -- # for sub in "$@" 00:26:32.507 22:26:29 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:32.507 22:26:29 -- target/dif.sh@36 -- # local sub_id=1 00:26:32.507 22:26:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.507 22:26:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.507 22:26:29 -- common/autotest_common.sh@10 -- # set +x 00:26:32.507 22:26:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.507 22:26:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:32.507 22:26:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.507 22:26:29 -- common/autotest_common.sh@10 -- # set +x 00:26:32.507 22:26:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.507 00:26:32.507 real 0m23.816s 00:26:32.507 user 2m8.088s 00:26:32.507 sys 0m3.644s 00:26:32.507 22:26:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:32.507 22:26:29 -- common/autotest_common.sh@10 -- # set +x 00:26:32.507 ************************************ 00:26:32.507 END TEST fio_dif_rand_params 00:26:32.507 ************************************ 00:26:32.507 22:26:29 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:32.507 22:26:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:32.507 22:26:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:32.507 22:26:29 -- common/autotest_common.sh@10 -- # set +x 00:26:32.507 ************************************ 00:26:32.507 START TEST fio_dif_digest 00:26:32.507 ************************************ 00:26:32.507 22:26:29 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:32.507 22:26:29 -- target/dif.sh@123 -- # local NULL_DIF 00:26:32.507 22:26:29 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:32.507 22:26:29 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:32.507 22:26:29 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:32.507 22:26:29 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:32.507 22:26:29 -- target/dif.sh@127 -- # numjobs=3 00:26:32.507 22:26:29 -- target/dif.sh@127 -- # iodepth=3 00:26:32.507 22:26:29 -- target/dif.sh@127 -- # runtime=10 00:26:32.507 22:26:29 -- target/dif.sh@128 -- # hdgst=true 00:26:32.508 22:26:29 -- target/dif.sh@128 -- # ddgst=true 00:26:32.508 22:26:29 -- target/dif.sh@130 -- # create_subsystems 0 00:26:32.508 22:26:29 -- target/dif.sh@28 -- # local sub 00:26:32.508 22:26:29 -- target/dif.sh@30 -- # for sub in "$@" 00:26:32.508 22:26:29 -- target/dif.sh@31 -- # create_subsystem 0 00:26:32.508 22:26:29 -- target/dif.sh@18 -- # local sub_id=0 00:26:32.508 22:26:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:32.508 22:26:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.508 22:26:29 -- common/autotest_common.sh@10 -- # set +x 00:26:32.508 bdev_null0 00:26:32.508 22:26:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.508 22:26:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:32.508 22:26:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.508 22:26:29 -- common/autotest_common.sh@10 -- # set +x 00:26:32.508 22:26:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.508 22:26:29 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:32.508 22:26:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.508 22:26:29 -- common/autotest_common.sh@10 -- # set +x 00:26:32.765 22:26:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.765 22:26:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:32.765 22:26:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.765 22:26:29 -- common/autotest_common.sh@10 -- # set +x 00:26:32.765 [2024-11-17 22:26:29.130924] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.765 22:26:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.765 22:26:29 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:32.765 22:26:29 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:32.765 22:26:29 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:32.765 22:26:29 -- nvmf/common.sh@520 -- # config=() 00:26:32.765 22:26:29 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.765 22:26:29 -- nvmf/common.sh@520 -- # local subsystem config 00:26:32.765 22:26:29 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.765 22:26:29 -- target/dif.sh@82 -- # gen_fio_conf 00:26:32.765 22:26:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:32.765 22:26:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:32.765 { 00:26:32.765 "params": { 00:26:32.765 "name": "Nvme$subsystem", 00:26:32.765 "trtype": "$TEST_TRANSPORT", 00:26:32.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.765 "adrfam": "ipv4", 00:26:32.765 "trsvcid": "$NVMF_PORT", 00:26:32.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.765 "hdgst": ${hdgst:-false}, 00:26:32.765 "ddgst": ${ddgst:-false} 00:26:32.765 }, 00:26:32.765 "method": "bdev_nvme_attach_controller" 00:26:32.766 } 00:26:32.766 EOF 00:26:32.766 )") 00:26:32.766 22:26:29 -- target/dif.sh@54 -- # local file 00:26:32.766 22:26:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:32.766 22:26:29 -- target/dif.sh@56 -- # cat 00:26:32.766 22:26:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:32.766 22:26:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:32.766 22:26:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:32.766 22:26:29 -- common/autotest_common.sh@1330 -- # shift 00:26:32.766 22:26:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:32.766 22:26:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:32.766 22:26:29 -- nvmf/common.sh@542 -- # cat 00:26:32.766 22:26:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:32.766 22:26:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:32.766 22:26:29 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:32.766 22:26:29 -- target/dif.sh@72 -- # (( file <= files )) 00:26:32.766 22:26:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:32.766 22:26:29 -- nvmf/common.sh@544 -- # jq . 
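Editor's note: behind the rpc_cmd wrapper, the digest-test setup above is four plain RPC calls: create a null bdev with 16-byte metadata and DIF type 3, create the subsystem, attach the namespace, and add the TCP listener. Pulled out of the harness for reference (the rpc.py location is assumed from the repo layout seen in this log, and the TCP transport itself is created earlier in the test, outside this excerpt):

    # Target-side equivalent of the traced rpc_cmd calls.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

On the initiator side, the only digest-specific piece is the "hdgst": true / "ddgst": true pair in the bdev_nvme_attach_controller params printed just below.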
00:26:32.766 22:26:29 -- nvmf/common.sh@545 -- # IFS=, 00:26:32.766 22:26:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:32.766 "params": { 00:26:32.766 "name": "Nvme0", 00:26:32.766 "trtype": "tcp", 00:26:32.766 "traddr": "10.0.0.2", 00:26:32.766 "adrfam": "ipv4", 00:26:32.766 "trsvcid": "4420", 00:26:32.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:32.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:32.766 "hdgst": true, 00:26:32.766 "ddgst": true 00:26:32.766 }, 00:26:32.766 "method": "bdev_nvme_attach_controller" 00:26:32.766 }' 00:26:32.766 22:26:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:32.766 22:26:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:32.766 22:26:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:32.766 22:26:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:32.766 22:26:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:32.766 22:26:29 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:32.766 22:26:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:32.766 22:26:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:32.766 22:26:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:32.766 22:26:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.766 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:32.766 ... 00:26:32.766 fio-3.35 00:26:32.766 Starting 3 threads 00:26:33.331 [2024-11-17 22:26:29.720441] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:33.331 [2024-11-17 22:26:29.720534] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:43.300 00:26:43.300 filename0: (groupid=0, jobs=1): err= 0: pid=92262: Sun Nov 17 22:26:39 2024 00:26:43.300 read: IOPS=259, BW=32.5MiB/s (34.1MB/s)(325MiB/10003msec) 00:26:43.300 slat (nsec): min=3701, max=68058, avg=17114.14, stdev=7050.31 00:26:43.300 clat (usec): min=7314, max=90490, avg=11525.52, stdev=8294.65 00:26:43.300 lat (usec): min=7326, max=90509, avg=11542.63, stdev=8294.82 00:26:43.300 clat percentiles (usec): 00:26:43.300 | 1.00th=[ 8094], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:26:43.300 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:26:43.300 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11076], 95.00th=[11863], 00:26:43.300 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52691], 99.95th=[89654], 00:26:43.300 | 99.99th=[90702] 00:26:43.300 bw ( KiB/s): min=23040, max=39424, per=33.96%, avg=33482.11, stdev=3957.28, samples=19 00:26:43.300 iops : min= 180, max= 308, avg=261.58, stdev=30.92, samples=19 00:26:43.300 lat (msec) : 10=55.21%, 20=40.71%, 50=1.39%, 100=2.69% 00:26:43.300 cpu : usr=95.15%, sys=3.45%, ctx=28, majf=0, minf=9 00:26:43.300 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.300 issued rwts: total=2599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.300 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:43.300 filename0: (groupid=0, jobs=1): err= 0: pid=92263: Sun Nov 17 22:26:39 2024 00:26:43.300 read: IOPS=274, BW=34.3MiB/s (35.9MB/s)(343MiB/10004msec) 00:26:43.300 slat (usec): min=6, max=138, avg=13.76, stdev= 6.74 00:26:43.300 clat (usec): min=5438, max=53361, avg=10926.55, stdev=2466.47 00:26:43.300 lat (usec): min=5456, max=53372, avg=10940.31, stdev=2467.78 00:26:43.300 clat percentiles (usec): 00:26:43.300 | 1.00th=[ 6128], 5.00th=[ 6849], 10.00th=[ 7242], 20.00th=[ 8455], 00:26:43.300 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11600], 60.00th=[11863], 00:26:43.300 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13042], 95.00th=[13304], 00:26:43.300 | 99.00th=[14091], 99.50th=[14484], 99.90th=[48497], 99.95th=[49021], 00:26:43.300 | 99.99th=[53216] 00:26:43.300 bw ( KiB/s): min=29696, max=41728, per=35.49%, avg=34991.16, stdev=2710.66, samples=19 00:26:43.300 iops : min= 232, max= 326, avg=273.37, stdev=21.18, samples=19 00:26:43.300 lat (msec) : 10=24.22%, 20=75.67%, 50=0.07%, 100=0.04% 00:26:43.300 cpu : usr=94.20%, sys=4.20%, ctx=7, majf=0, minf=0 00:26:43.300 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.300 issued rwts: total=2742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.300 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:43.300 filename0: (groupid=0, jobs=1): err= 0: pid=92264: Sun Nov 17 22:26:39 2024 00:26:43.300 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(300MiB/10045msec) 00:26:43.300 slat (nsec): min=6040, max=74693, avg=12459.52, stdev=6616.24 00:26:43.300 clat (usec): min=4661, max=46560, avg=12532.23, stdev=2228.40 00:26:43.300 lat (usec): min=4672, max=46570, avg=12544.69, stdev=2229.37 00:26:43.300 clat percentiles (usec): 
00:26:43.300 | 1.00th=[ 8029], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[10552], 00:26:43.300 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:26:43.300 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:26:43.300 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16909], 99.95th=[44303], 00:26:43.300 | 99.99th=[46400] 00:26:43.300 bw ( KiB/s): min=28416, max=34560, per=31.09%, avg=30659.05, stdev=1549.33, samples=20 00:26:43.301 iops : min= 222, max= 270, avg=239.50, stdev=12.10, samples=20 00:26:43.301 lat (msec) : 10=18.69%, 20=81.23%, 50=0.08% 00:26:43.301 cpu : usr=94.45%, sys=4.01%, ctx=114, majf=0, minf=9 00:26:43.301 IO depths : 1=16.3%, 2=83.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.301 issued rwts: total=2397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.301 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:43.301 00:26:43.301 Run status group 0 (all jobs): 00:26:43.301 READ: bw=96.3MiB/s (101MB/s), 29.8MiB/s-34.3MiB/s (31.3MB/s-35.9MB/s), io=967MiB (1014MB), run=10003-10045msec 00:26:43.562 22:26:40 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:43.562 22:26:40 -- target/dif.sh@43 -- # local sub 00:26:43.562 22:26:40 -- target/dif.sh@45 -- # for sub in "$@" 00:26:43.562 22:26:40 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:43.562 22:26:40 -- target/dif.sh@36 -- # local sub_id=0 00:26:43.562 22:26:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:43.562 22:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.562 22:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:43.562 22:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.562 22:26:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:43.562 22:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.562 22:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:43.562 22:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.562 00:26:43.562 real 0m11.053s 00:26:43.562 user 0m29.129s 00:26:43.562 sys 0m1.425s 00:26:43.562 22:26:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:43.562 ************************************ 00:26:43.562 END TEST fio_dif_digest 00:26:43.562 ************************************ 00:26:43.562 22:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:43.821 22:26:40 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:43.821 22:26:40 -- target/dif.sh@147 -- # nvmftestfini 00:26:43.821 22:26:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:43.821 22:26:40 -- nvmf/common.sh@116 -- # sync 00:26:43.821 22:26:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:43.821 22:26:40 -- nvmf/common.sh@119 -- # set +e 00:26:43.821 22:26:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:43.821 22:26:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:43.821 rmmod nvme_tcp 00:26:43.821 rmmod nvme_fabrics 00:26:43.821 rmmod nvme_keyring 00:26:43.821 22:26:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:43.821 22:26:40 -- nvmf/common.sh@123 -- # set -e 00:26:43.821 22:26:40 -- nvmf/common.sh@124 -- # return 0 00:26:43.821 22:26:40 -- nvmf/common.sh@477 -- # '[' -n 91494 ']' 00:26:43.821 22:26:40 -- nvmf/common.sh@478 -- # killprocess 91494 00:26:43.821 22:26:40 -- common/autotest_common.sh@936 -- # '[' -z 91494 
']' 00:26:43.821 22:26:40 -- common/autotest_common.sh@940 -- # kill -0 91494 00:26:43.821 22:26:40 -- common/autotest_common.sh@941 -- # uname 00:26:43.821 22:26:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:43.821 22:26:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91494 00:26:43.821 22:26:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:43.821 killing process with pid 91494 00:26:43.821 22:26:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:43.821 22:26:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91494' 00:26:43.821 22:26:40 -- common/autotest_common.sh@955 -- # kill 91494 00:26:43.821 22:26:40 -- common/autotest_common.sh@960 -- # wait 91494 00:26:44.080 22:26:40 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:44.080 22:26:40 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:44.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:44.339 Waiting for block devices as requested 00:26:44.598 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:44.598 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:44.598 22:26:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:44.598 22:26:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:44.598 22:26:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:44.598 22:26:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:44.598 22:26:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.598 22:26:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:44.598 22:26:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.598 22:26:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:44.598 00:26:44.598 real 1m0.469s 00:26:44.598 user 3m54.229s 00:26:44.598 sys 0m12.911s 00:26:44.598 22:26:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:44.598 22:26:41 -- common/autotest_common.sh@10 -- # set +x 00:26:44.598 ************************************ 00:26:44.598 END TEST nvmf_dif 00:26:44.598 ************************************ 00:26:44.857 22:26:41 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:44.857 22:26:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:44.857 22:26:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:44.857 22:26:41 -- common/autotest_common.sh@10 -- # set +x 00:26:44.857 ************************************ 00:26:44.857 START TEST nvmf_abort_qd_sizes 00:26:44.857 ************************************ 00:26:44.857 22:26:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:44.857 * Looking for test storage... 
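Editor's note: nvmftestfini above tears the NVMe/TCP test environment back down before the next suite starts: it unloads the kernel initiator modules, kills the target process, rebinds devices to their kernel drivers, and flushes the test interface. Condensed into standalone commands (the pid and interface name are the ones from this run and will differ elsewhere):

    # Roughly what nvmftestfini did in this run.
    sync
    modprobe -v -r nvme-tcp        # also removes nvme_fabrics/nvme_keyring as unused dependents
    modprobe -v -r nvme-fabrics
    kill 91494                     # nvmf target pid saved by the harness (killprocess)
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
    ip -4 addr flush nvmf_init_if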
00:26:44.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:44.857 22:26:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:44.857 22:26:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:44.857 22:26:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:44.857 22:26:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:44.857 22:26:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:44.857 22:26:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:44.857 22:26:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:44.857 22:26:41 -- scripts/common.sh@335 -- # IFS=.-: 00:26:44.857 22:26:41 -- scripts/common.sh@335 -- # read -ra ver1 00:26:44.857 22:26:41 -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.857 22:26:41 -- scripts/common.sh@336 -- # read -ra ver2 00:26:44.857 22:26:41 -- scripts/common.sh@337 -- # local 'op=<' 00:26:44.857 22:26:41 -- scripts/common.sh@339 -- # ver1_l=2 00:26:44.857 22:26:41 -- scripts/common.sh@340 -- # ver2_l=1 00:26:44.857 22:26:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:44.857 22:26:41 -- scripts/common.sh@343 -- # case "$op" in 00:26:44.857 22:26:41 -- scripts/common.sh@344 -- # : 1 00:26:44.857 22:26:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:44.857 22:26:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:44.857 22:26:41 -- scripts/common.sh@364 -- # decimal 1 00:26:44.857 22:26:41 -- scripts/common.sh@352 -- # local d=1 00:26:44.857 22:26:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.857 22:26:41 -- scripts/common.sh@354 -- # echo 1 00:26:44.857 22:26:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:44.857 22:26:41 -- scripts/common.sh@365 -- # decimal 2 00:26:44.857 22:26:41 -- scripts/common.sh@352 -- # local d=2 00:26:44.857 22:26:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.857 22:26:41 -- scripts/common.sh@354 -- # echo 2 00:26:44.857 22:26:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:44.857 22:26:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:44.857 22:26:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:44.857 22:26:41 -- scripts/common.sh@367 -- # return 0 00:26:44.857 22:26:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.857 22:26:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:44.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.857 --rc genhtml_branch_coverage=1 00:26:44.857 --rc genhtml_function_coverage=1 00:26:44.857 --rc genhtml_legend=1 00:26:44.857 --rc geninfo_all_blocks=1 00:26:44.857 --rc geninfo_unexecuted_blocks=1 00:26:44.857 00:26:44.857 ' 00:26:44.857 22:26:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:44.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.857 --rc genhtml_branch_coverage=1 00:26:44.857 --rc genhtml_function_coverage=1 00:26:44.857 --rc genhtml_legend=1 00:26:44.857 --rc geninfo_all_blocks=1 00:26:44.857 --rc geninfo_unexecuted_blocks=1 00:26:44.857 00:26:44.857 ' 00:26:44.857 22:26:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:44.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.857 --rc genhtml_branch_coverage=1 00:26:44.857 --rc genhtml_function_coverage=1 00:26:44.857 --rc genhtml_legend=1 00:26:44.857 --rc geninfo_all_blocks=1 00:26:44.857 --rc geninfo_unexecuted_blocks=1 00:26:44.857 00:26:44.857 ' 00:26:44.858 
22:26:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:44.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.858 --rc genhtml_branch_coverage=1 00:26:44.858 --rc genhtml_function_coverage=1 00:26:44.858 --rc genhtml_legend=1 00:26:44.858 --rc geninfo_all_blocks=1 00:26:44.858 --rc geninfo_unexecuted_blocks=1 00:26:44.858 00:26:44.858 ' 00:26:44.858 22:26:41 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:44.858 22:26:41 -- nvmf/common.sh@7 -- # uname -s 00:26:44.858 22:26:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.858 22:26:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.858 22:26:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.858 22:26:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.858 22:26:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.858 22:26:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.858 22:26:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.858 22:26:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.858 22:26:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.858 22:26:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.858 22:26:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 00:26:44.858 22:26:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=a547cde3-4ce3-4fca-917e-78af6442a671 00:26:44.858 22:26:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.858 22:26:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.858 22:26:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:44.858 22:26:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:44.858 22:26:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.858 22:26:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.858 22:26:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.858 22:26:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.858 22:26:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.858 22:26:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.858 22:26:41 -- paths/export.sh@5 -- # export PATH 00:26:44.858 22:26:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.858 22:26:41 -- nvmf/common.sh@46 -- # : 0 00:26:44.858 22:26:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:44.858 22:26:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:44.858 22:26:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:44.858 22:26:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.858 22:26:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.858 22:26:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:44.858 22:26:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:44.858 22:26:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:44.858 22:26:41 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:44.858 22:26:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:44.858 22:26:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.858 22:26:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:44.858 22:26:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:44.858 22:26:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:44.858 22:26:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.858 22:26:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:44.858 22:26:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.858 22:26:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:44.858 22:26:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:44.858 22:26:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:44.858 22:26:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:44.858 22:26:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:44.858 22:26:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:44.858 22:26:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.858 22:26:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.858 22:26:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:44.858 22:26:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:44.858 22:26:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:44.858 22:26:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:44.858 22:26:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:44.858 22:26:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.858 22:26:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:44.858 22:26:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:44.858 22:26:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:44.858 22:26:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:44.858 22:26:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:44.858 22:26:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:44.858 Cannot find device "nvmf_tgt_br" 00:26:44.858 22:26:41 -- nvmf/common.sh@154 -- # true 00:26:44.858 22:26:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:44.858 Cannot find device "nvmf_tgt_br2" 00:26:44.858 22:26:41 -- nvmf/common.sh@155 -- # true 
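The "Cannot find device" and "Cannot open network namespace" messages here are expected: nvmf_veth_init first tears down any topology left over from a previous run and deliberately ignores the failures (the trailing "# true" trace lines), then rebuilds the virtual network in the lines that follow. Condensed into one place, with the names and addresses exactly as they appear in the trace (link-up steps and the ping checks omitted for brevity):

ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pair, first address
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target-side pair, second address
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                               # bridge joins the *_br peer ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP replies back in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let traffic cross the bridge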
00:26:44.858 22:26:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:44.858 22:26:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:45.116 Cannot find device "nvmf_tgt_br" 00:26:45.116 22:26:41 -- nvmf/common.sh@157 -- # true 00:26:45.116 22:26:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:45.117 Cannot find device "nvmf_tgt_br2" 00:26:45.117 22:26:41 -- nvmf/common.sh@158 -- # true 00:26:45.117 22:26:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:45.117 22:26:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:45.117 22:26:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:45.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:45.117 22:26:41 -- nvmf/common.sh@161 -- # true 00:26:45.117 22:26:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:45.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:45.117 22:26:41 -- nvmf/common.sh@162 -- # true 00:26:45.117 22:26:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:45.117 22:26:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:45.117 22:26:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:45.117 22:26:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:45.117 22:26:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:45.117 22:26:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:45.117 22:26:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:45.117 22:26:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:45.117 22:26:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:45.117 22:26:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:45.117 22:26:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:45.117 22:26:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:45.117 22:26:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:45.117 22:26:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:45.117 22:26:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:45.117 22:26:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:45.117 22:26:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:45.117 22:26:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:45.117 22:26:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:45.117 22:26:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:45.375 22:26:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:45.375 22:26:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:45.375 22:26:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:45.375 22:26:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:45.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:45.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:26:45.375 00:26:45.375 --- 10.0.0.2 ping statistics --- 00:26:45.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.375 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:26:45.375 22:26:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:45.375 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:45.375 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:26:45.375 00:26:45.375 --- 10.0.0.3 ping statistics --- 00:26:45.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.375 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:26:45.375 22:26:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:45.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:45.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:26:45.375 00:26:45.376 --- 10.0.0.1 ping statistics --- 00:26:45.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.376 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:26:45.376 22:26:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.376 22:26:41 -- nvmf/common.sh@421 -- # return 0 00:26:45.376 22:26:41 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:45.376 22:26:41 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:45.944 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:45.944 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:46.203 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:46.203 22:26:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.203 22:26:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:46.203 22:26:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:46.203 22:26:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.203 22:26:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:46.203 22:26:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:46.203 22:26:42 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:46.203 22:26:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:46.203 22:26:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:46.203 22:26:42 -- common/autotest_common.sh@10 -- # set +x 00:26:46.203 22:26:42 -- nvmf/common.sh@469 -- # nvmfpid=92873 00:26:46.203 22:26:42 -- nvmf/common.sh@470 -- # waitforlisten 92873 00:26:46.203 22:26:42 -- common/autotest_common.sh@829 -- # '[' -z 92873 ']' 00:26:46.203 22:26:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.203 22:26:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:46.203 22:26:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:46.203 22:26:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.203 22:26:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:46.203 22:26:42 -- common/autotest_common.sh@10 -- # set +x 00:26:46.203 [2024-11-17 22:26:42.722224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
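At this point nvmfappstart has launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xf) and waitforlisten is blocking until pid 92873 exposes its RPC socket. The real helper lives in autotest_common.sh and its body is not part of this trace; the stand-in below only illustrates the idea, and the function name, timeout, and socket-only check are assumptions:

# illustrative only -- the actual waitforlisten helper does more than poll for the socket
wait_for_rpc_socket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [ -S "$sock" ] && return 0               # RPC socket exists: target is up and listening
        sleep 0.1
    done
    return 1                                     # timed out waiting for the target
}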
00:26:46.203 [2024-11-17 22:26:42.722295] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.461 [2024-11-17 22:26:42.853722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:46.461 [2024-11-17 22:26:42.930291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:46.461 [2024-11-17 22:26:42.930450] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.461 [2024-11-17 22:26:42.930462] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.461 [2024-11-17 22:26:42.930470] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:46.461 [2024-11-17 22:26:42.930655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.461 [2024-11-17 22:26:42.930788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.461 [2024-11-17 22:26:42.930895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.461 [2024-11-17 22:26:42.930901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.394 22:26:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.394 22:26:43 -- common/autotest_common.sh@862 -- # return 0 00:26:47.394 22:26:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:47.394 22:26:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:47.394 22:26:43 -- common/autotest_common.sh@10 -- # set +x 00:26:47.394 22:26:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.394 22:26:43 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:47.394 22:26:43 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:47.394 22:26:43 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:47.394 22:26:43 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:47.394 22:26:43 -- scripts/common.sh@312 -- # local nvmes 00:26:47.394 22:26:43 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:47.394 22:26:43 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:47.394 22:26:43 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:47.394 22:26:43 -- scripts/common.sh@297 -- # local bdf= 00:26:47.394 22:26:43 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:47.394 22:26:43 -- scripts/common.sh@232 -- # local class 00:26:47.394 22:26:43 -- scripts/common.sh@233 -- # local subclass 00:26:47.394 22:26:43 -- scripts/common.sh@234 -- # local progif 00:26:47.394 22:26:43 -- scripts/common.sh@235 -- # printf %02x 1 00:26:47.394 22:26:43 -- scripts/common.sh@235 -- # class=01 00:26:47.395 22:26:43 -- scripts/common.sh@236 -- # printf %02x 8 00:26:47.395 22:26:43 -- scripts/common.sh@236 -- # subclass=08 00:26:47.395 22:26:43 -- scripts/common.sh@237 -- # printf %02x 2 00:26:47.395 22:26:43 -- scripts/common.sh@237 -- # progif=02 00:26:47.395 22:26:43 -- scripts/common.sh@239 -- # hash lspci 00:26:47.395 22:26:43 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:47.395 22:26:43 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:47.395 22:26:43 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:47.395 22:26:43 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:47.395 22:26:43 -- scripts/common.sh@244 -- # tr -d '"' 00:26:47.395 22:26:43 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:47.395 22:26:43 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:47.395 22:26:43 -- scripts/common.sh@15 -- # local i 00:26:47.395 22:26:43 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:47.395 22:26:43 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:47.395 22:26:43 -- scripts/common.sh@24 -- # return 0 00:26:47.395 22:26:43 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:47.395 22:26:43 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:47.395 22:26:43 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:47.395 22:26:43 -- scripts/common.sh@15 -- # local i 00:26:47.395 22:26:43 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:47.395 22:26:43 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:47.395 22:26:43 -- scripts/common.sh@24 -- # return 0 00:26:47.395 22:26:43 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:47.395 22:26:43 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:47.395 22:26:43 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:47.395 22:26:43 -- scripts/common.sh@322 -- # uname -s 00:26:47.395 22:26:43 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:47.395 22:26:43 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:47.395 22:26:43 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:47.395 22:26:43 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:47.395 22:26:43 -- scripts/common.sh@322 -- # uname -s 00:26:47.395 22:26:43 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:47.395 22:26:43 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:47.395 22:26:43 -- scripts/common.sh@327 -- # (( 2 )) 00:26:47.395 22:26:43 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:47.395 22:26:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:47.395 22:26:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:47.395 22:26:43 -- common/autotest_common.sh@10 -- # set +x 00:26:47.395 ************************************ 00:26:47.395 START TEST spdk_target_abort 00:26:47.395 ************************************ 00:26:47.395 22:26:43 -- common/autotest_common.sh@1114 -- # spdk_target 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:47.395 22:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.395 22:26:43 -- common/autotest_common.sh@10 -- # set +x 00:26:47.395 spdk_targetn1 00:26:47.395 22:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:47.395 22:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.395 22:26:43 -- common/autotest_common.sh@10 -- # set +x 00:26:47.395 [2024-11-17 
22:26:43.954996] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.395 22:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:47.395 22:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.395 22:26:43 -- common/autotest_common.sh@10 -- # set +x 00:26:47.395 22:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:47.395 22:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.395 22:26:43 -- common/autotest_common.sh@10 -- # set +x 00:26:47.395 22:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:47.395 22:26:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.395 22:26:43 -- common/autotest_common.sh@10 -- # set +x 00:26:47.395 [2024-11-17 22:26:43.987169] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.395 22:26:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:47.395 22:26:43 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:50.678 Initializing NVMe Controllers 00:26:50.678 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:50.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:50.678 Initialization complete. Launching workers. 00:26:50.678 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10508, failed: 0 00:26:50.678 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1147, failed to submit 9361 00:26:50.678 success 842, unsuccess 305, failed 0 00:26:50.678 22:26:47 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:50.678 22:26:47 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:53.960 Initializing NVMe Controllers 00:26:53.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:53.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:53.960 Initialization complete. Launching workers. 00:26:53.960 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5960, failed: 0 00:26:53.960 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1209, failed to submit 4751 00:26:53.960 success 268, unsuccess 941, failed 0 00:26:53.960 22:26:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:53.960 22:26:50 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:57.243 Initializing NVMe Controllers 00:26:57.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:57.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:57.243 Initialization complete. Launching workers. 
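The two completed runs above and the one starting here are iterations of the same loop: rabort assembles one connection string from trtype/adrfam/traddr/trsvcid/subnqn and drives the abort example at each queue depth in qds=(4 24 64). Collapsed from the traced commands:

target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
for qd in 4 24 64; do
    # 50% read / 50% write, 4096-byte I/O, queue depth $qd, aborts fired at outstanding commands
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

In the per-run summaries, "abort submitted / failed to submit" counts the abort commands themselves, while "success / unsuccess" reflects whether each abort caught its I/O before the target completed it, so a large "unsuccess" count is not by itself a failure; that reading of the counters is ours, not stated in the log.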
00:26:57.243 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 29709, failed: 0 00:26:57.243 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2721, failed to submit 26988 00:26:57.243 success 418, unsuccess 2303, failed 0 00:26:57.243 22:26:53 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:57.243 22:26:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.243 22:26:53 -- common/autotest_common.sh@10 -- # set +x 00:26:57.243 22:26:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.243 22:26:53 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:57.243 22:26:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.243 22:26:53 -- common/autotest_common.sh@10 -- # set +x 00:26:57.809 22:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.809 22:26:54 -- target/abort_qd_sizes.sh@62 -- # killprocess 92873 00:26:57.809 22:26:54 -- common/autotest_common.sh@936 -- # '[' -z 92873 ']' 00:26:57.809 22:26:54 -- common/autotest_common.sh@940 -- # kill -0 92873 00:26:57.809 22:26:54 -- common/autotest_common.sh@941 -- # uname 00:26:57.809 22:26:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:57.809 22:26:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92873 00:26:57.809 killing process with pid 92873 00:26:57.809 22:26:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:57.809 22:26:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:57.809 22:26:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92873' 00:26:57.809 22:26:54 -- common/autotest_common.sh@955 -- # kill 92873 00:26:57.809 22:26:54 -- common/autotest_common.sh@960 -- # wait 92873 00:26:58.068 00:26:58.068 real 0m10.643s 00:26:58.068 user 0m44.041s 00:26:58.068 sys 0m1.603s 00:26:58.068 22:26:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:58.068 ************************************ 00:26:58.068 END TEST spdk_target_abort 00:26:58.068 ************************************ 00:26:58.068 22:26:54 -- common/autotest_common.sh@10 -- # set +x 00:26:58.068 22:26:54 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:58.068 22:26:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:58.068 22:26:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:58.068 22:26:54 -- common/autotest_common.sh@10 -- # set +x 00:26:58.068 ************************************ 00:26:58.068 START TEST kernel_target_abort 00:26:58.068 ************************************ 00:26:58.068 22:26:54 -- common/autotest_common.sh@1114 -- # kernel_target 00:26:58.068 22:26:54 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:58.068 22:26:54 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:58.068 22:26:54 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:58.068 22:26:54 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:58.068 22:26:54 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:58.068 22:26:54 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:58.068 22:26:54 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:58.068 22:26:54 -- nvmf/common.sh@627 -- # local block nvme 00:26:58.068 22:26:54 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:58.068 22:26:54 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:58.068 22:26:54 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:58.068 22:26:54 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:58.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:58.586 Waiting for block devices as requested 00:26:58.586 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:58.586 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:58.586 22:26:55 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:58.586 22:26:55 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:58.586 22:26:55 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:58.586 22:26:55 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:58.586 22:26:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:58.844 No valid GPT data, bailing 00:26:58.844 22:26:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:58.844 22:26:55 -- scripts/common.sh@393 -- # pt= 00:26:58.844 22:26:55 -- scripts/common.sh@394 -- # return 1 00:26:58.844 22:26:55 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:58.844 22:26:55 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:58.844 22:26:55 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:58.844 22:26:55 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:58.844 22:26:55 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:58.844 22:26:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:58.844 No valid GPT data, bailing 00:26:58.844 22:26:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:58.844 22:26:55 -- scripts/common.sh@393 -- # pt= 00:26:58.844 22:26:55 -- scripts/common.sh@394 -- # return 1 00:26:58.844 22:26:55 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:58.844 22:26:55 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:58.844 22:26:55 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:58.844 22:26:55 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:58.844 22:26:55 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:58.844 22:26:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:58.844 No valid GPT data, bailing 00:26:58.844 22:26:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:58.844 22:26:55 -- scripts/common.sh@393 -- # pt= 00:26:58.844 22:26:55 -- scripts/common.sh@394 -- # return 1 00:26:58.844 22:26:55 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:58.844 22:26:55 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:58.844 22:26:55 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:58.844 22:26:55 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:58.844 22:26:55 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:58.844 22:26:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:58.844 No valid GPT data, bailing 00:26:58.845 22:26:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:58.845 22:26:55 -- scripts/common.sh@393 -- # pt= 00:26:58.845 22:26:55 -- scripts/common.sh@394 -- # return 1 00:26:58.845 22:26:55 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:58.845 22:26:55 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:58.845 22:26:55 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:58.845 22:26:55 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:58.845 22:26:55 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:58.845 22:26:55 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:58.845 22:26:55 -- nvmf/common.sh@654 -- # echo 1 00:26:58.845 22:26:55 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:58.845 22:26:55 -- nvmf/common.sh@656 -- # echo 1 00:26:58.845 22:26:55 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:58.845 22:26:55 -- nvmf/common.sh@663 -- # echo tcp 00:26:58.845 22:26:55 -- nvmf/common.sh@664 -- # echo 4420 00:26:58.845 22:26:55 -- nvmf/common.sh@665 -- # echo ipv4 00:26:58.845 22:26:55 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:59.103 22:26:55 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a547cde3-4ce3-4fca-917e-78af6442a671 --hostid=a547cde3-4ce3-4fca-917e-78af6442a671 -a 10.0.0.1 -t tcp -s 4420 00:26:59.103 00:26:59.103 Discovery Log Number of Records 2, Generation counter 2 00:26:59.103 =====Discovery Log Entry 0====== 00:26:59.103 trtype: tcp 00:26:59.103 adrfam: ipv4 00:26:59.103 subtype: current discovery subsystem 00:26:59.103 treq: not specified, sq flow control disable supported 00:26:59.103 portid: 1 00:26:59.103 trsvcid: 4420 00:26:59.103 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:59.103 traddr: 10.0.0.1 00:26:59.103 eflags: none 00:26:59.103 sectype: none 00:26:59.103 =====Discovery Log Entry 1====== 00:26:59.103 trtype: tcp 00:26:59.103 adrfam: ipv4 00:26:59.103 subtype: nvme subsystem 00:26:59.103 treq: not specified, sq flow control disable supported 00:26:59.103 portid: 1 00:26:59.103 trsvcid: 4420 00:26:59.103 subnqn: kernel_target 00:26:59.103 traddr: 10.0.0.1 00:26:59.103 eflags: none 00:26:59.103 sectype: none 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.103 22:26:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:59.104 22:26:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.104 22:26:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
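Before the kernel-target abort runs below, the configfs setup traced above is worth collecting in one place. The xtrace shows only the echoed values, not their redirection targets, so the attribute file names in this sketch are the standard Linux nvmet ones and should be read as assumptions:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/kernel_target
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet
mkdir "$subsys" "$ns" "$port"
echo SPDK-kernel_target > "$subsys/attr_serial"           # assumed attribute file
echo 1                  > "$subsys/attr_allow_any_host"   # assumed attribute file
echo /dev/nvme1n3       > "$ns/device_path"               # the block device found unused above
echo 1                  > "$ns/enable"
echo 10.0.0.1           > "$port/addr_traddr"
echo tcp                > "$port/addr_trtype"
echo 4420               > "$port/addr_trsvcid"
echo ipv4               > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                       # expose the subsystem on the port

The nvme discover output above confirms the result: one discovery entry plus one NVMe/TCP subsystem named kernel_target listening on 10.0.0.1:4420.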
00:26:59.104 22:26:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.104 22:26:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:59.104 22:26:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:59.104 22:26:55 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:02.388 Initializing NVMe Controllers 00:27:02.388 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:02.388 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:02.388 Initialization complete. Launching workers. 00:27:02.388 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30422, failed: 0 00:27:02.388 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30422, failed to submit 0 00:27:02.388 success 0, unsuccess 30422, failed 0 00:27:02.388 22:26:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:02.388 22:26:58 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:05.735 Initializing NVMe Controllers 00:27:05.735 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:05.735 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:05.735 Initialization complete. Launching workers. 00:27:05.735 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68080, failed: 0 00:27:05.735 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27712, failed to submit 40368 00:27:05.735 success 0, unsuccess 27712, failed 0 00:27:05.735 22:27:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:05.735 22:27:01 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:09.021 Initializing NVMe Controllers 00:27:09.021 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:09.021 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:09.021 Initialization complete. Launching workers. 
00:27:09.021 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 74009, failed: 0 00:27:09.021 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18492, failed to submit 55517 00:27:09.021 success 0, unsuccess 18492, failed 0 00:27:09.021 22:27:05 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:27:09.021 22:27:05 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:27:09.021 22:27:05 -- nvmf/common.sh@677 -- # echo 0 00:27:09.021 22:27:05 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:27:09.021 22:27:05 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:09.021 22:27:05 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:09.021 22:27:05 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:09.021 22:27:05 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:27:09.021 22:27:05 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:27:09.021 00:27:09.021 real 0m10.491s 00:27:09.021 user 0m5.218s 00:27:09.021 sys 0m2.584s 00:27:09.021 22:27:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:09.021 22:27:05 -- common/autotest_common.sh@10 -- # set +x 00:27:09.021 ************************************ 00:27:09.021 END TEST kernel_target_abort 00:27:09.021 ************************************ 00:27:09.021 22:27:05 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:27:09.021 22:27:05 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:27:09.021 22:27:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:09.021 22:27:05 -- nvmf/common.sh@116 -- # sync 00:27:09.021 22:27:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:09.021 22:27:05 -- nvmf/common.sh@119 -- # set +e 00:27:09.021 22:27:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:09.021 22:27:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:09.021 rmmod nvme_tcp 00:27:09.021 rmmod nvme_fabrics 00:27:09.021 rmmod nvme_keyring 00:27:09.021 22:27:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:09.021 22:27:05 -- nvmf/common.sh@123 -- # set -e 00:27:09.021 22:27:05 -- nvmf/common.sh@124 -- # return 0 00:27:09.021 22:27:05 -- nvmf/common.sh@477 -- # '[' -n 92873 ']' 00:27:09.021 22:27:05 -- nvmf/common.sh@478 -- # killprocess 92873 00:27:09.021 22:27:05 -- common/autotest_common.sh@936 -- # '[' -z 92873 ']' 00:27:09.021 22:27:05 -- common/autotest_common.sh@940 -- # kill -0 92873 00:27:09.021 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (92873) - No such process 00:27:09.021 Process with pid 92873 is not found 00:27:09.021 22:27:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 92873 is not found' 00:27:09.021 22:27:05 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:09.021 22:27:05 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:09.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:09.538 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:09.538 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:09.538 22:27:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:09.538 22:27:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:09.538 22:27:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:09.538 22:27:05 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:09.538 22:27:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.538 22:27:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:09.538 22:27:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.538 22:27:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:09.538 ************************************ 00:27:09.538 END TEST nvmf_abort_qd_sizes 00:27:09.538 ************************************ 00:27:09.538 00:27:09.538 real 0m24.763s 00:27:09.538 user 0m50.767s 00:27:09.538 sys 0m5.490s 00:27:09.538 22:27:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:09.538 22:27:06 -- common/autotest_common.sh@10 -- # set +x 00:27:09.538 22:27:06 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:27:09.538 22:27:06 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:27:09.538 22:27:06 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:09.538 22:27:06 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:09.538 22:27:06 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:09.538 22:27:06 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:09.538 22:27:06 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:09.538 22:27:06 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:09.538 22:27:06 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:09.539 22:27:06 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:09.539 22:27:06 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:09.539 22:27:06 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:09.539 22:27:06 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:09.539 22:27:06 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:09.539 22:27:06 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:09.539 22:27:06 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:09.539 22:27:06 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:09.539 22:27:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:09.539 22:27:06 -- common/autotest_common.sh@10 -- # set +x 00:27:09.539 22:27:06 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:09.539 22:27:06 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:09.539 22:27:06 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:09.539 22:27:06 -- common/autotest_common.sh@10 -- # set +x 00:27:11.444 INFO: APP EXITING 00:27:11.444 INFO: killing all VMs 00:27:11.444 INFO: killing vhost app 00:27:11.444 INFO: EXIT DONE 00:27:12.011 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:12.011 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:12.269 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:12.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:12.834 Cleaning 00:27:12.834 Removing: /var/run/dpdk/spdk0/config 00:27:12.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:12.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:12.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:12.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:12.834 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:12.834 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:12.834 Removing: /var/run/dpdk/spdk1/config 00:27:12.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:12.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:12.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:12.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:12.834 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:12.834 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:12.834 Removing: /var/run/dpdk/spdk2/config 00:27:12.834 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:12.834 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:12.834 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:13.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:13.093 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:13.093 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:13.093 Removing: /var/run/dpdk/spdk3/config 00:27:13.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:13.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:13.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:13.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:13.093 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:13.093 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:13.093 Removing: /var/run/dpdk/spdk4/config 00:27:13.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:13.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:13.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:13.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:13.093 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:13.093 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:13.093 Removing: /dev/shm/nvmf_trace.0 00:27:13.093 Removing: /dev/shm/spdk_tgt_trace.pid55509 00:27:13.093 Removing: /var/run/dpdk/spdk0 00:27:13.093 Removing: /var/run/dpdk/spdk1 00:27:13.093 Removing: /var/run/dpdk/spdk2 00:27:13.093 Removing: /var/run/dpdk/spdk3 00:27:13.093 Removing: /var/run/dpdk/spdk4 00:27:13.093 Removing: /var/run/dpdk/spdk_pid55351 00:27:13.093 Removing: /var/run/dpdk/spdk_pid55509 00:27:13.093 Removing: /var/run/dpdk/spdk_pid55825 00:27:13.093 Removing: /var/run/dpdk/spdk_pid56105 00:27:13.093 Removing: /var/run/dpdk/spdk_pid56296 00:27:13.093 Removing: /var/run/dpdk/spdk_pid56386 00:27:13.093 Removing: /var/run/dpdk/spdk_pid56485 00:27:13.093 Removing: /var/run/dpdk/spdk_pid56586 00:27:13.093 Removing: /var/run/dpdk/spdk_pid56630 00:27:13.093 Removing: /var/run/dpdk/spdk_pid56660 00:27:13.093 Removing: /var/run/dpdk/spdk_pid56734 00:27:13.093 Removing: /var/run/dpdk/spdk_pid56846 00:27:13.093 Removing: /var/run/dpdk/spdk_pid57484 00:27:13.093 Removing: /var/run/dpdk/spdk_pid57543 00:27:13.093 Removing: /var/run/dpdk/spdk_pid57612 00:27:13.093 Removing: /var/run/dpdk/spdk_pid57640 00:27:13.093 Removing: /var/run/dpdk/spdk_pid57730 00:27:13.093 Removing: /var/run/dpdk/spdk_pid57758 00:27:13.093 Removing: /var/run/dpdk/spdk_pid57837 00:27:13.093 Removing: /var/run/dpdk/spdk_pid57865 00:27:13.093 Removing: /var/run/dpdk/spdk_pid57922 00:27:13.093 Removing: /var/run/dpdk/spdk_pid57952 00:27:13.093 Removing: /var/run/dpdk/spdk_pid57998 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58028 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58187 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58228 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58304 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58379 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58409 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58468 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58487 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58522 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58541 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58576 
00:27:13.093 Removing: /var/run/dpdk/spdk_pid58595 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58631 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58649 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58688 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58709 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58742 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58763 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58796 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58817 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58850 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58871 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58905 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58925 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58965 00:27:13.093 Removing: /var/run/dpdk/spdk_pid58979 00:27:13.093 Removing: /var/run/dpdk/spdk_pid59019 00:27:13.093 Removing: /var/run/dpdk/spdk_pid59033 00:27:13.093 Removing: /var/run/dpdk/spdk_pid59073 00:27:13.093 Removing: /var/run/dpdk/spdk_pid59087 00:27:13.093 Removing: /var/run/dpdk/spdk_pid59127 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59147 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59181 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59201 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59237 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59257 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59291 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59311 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59353 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59370 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59413 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59435 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59473 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59498 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59527 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59552 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59582 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59660 00:27:13.352 Removing: /var/run/dpdk/spdk_pid59778 00:27:13.352 Removing: /var/run/dpdk/spdk_pid60222 00:27:13.352 Removing: /var/run/dpdk/spdk_pid67202 00:27:13.352 Removing: /var/run/dpdk/spdk_pid67547 00:27:13.352 Removing: /var/run/dpdk/spdk_pid69964 00:27:13.352 Removing: /var/run/dpdk/spdk_pid70346 00:27:13.352 Removing: /var/run/dpdk/spdk_pid70590 00:27:13.352 Removing: /var/run/dpdk/spdk_pid70633 00:27:13.352 Removing: /var/run/dpdk/spdk_pid70907 00:27:13.352 Removing: /var/run/dpdk/spdk_pid70909 00:27:13.352 Removing: /var/run/dpdk/spdk_pid70967 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71022 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71086 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71124 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71126 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71157 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71194 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71196 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71260 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71318 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71373 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71417 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71424 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71449 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71753 00:27:13.352 Removing: /var/run/dpdk/spdk_pid71905 00:27:13.352 Removing: /var/run/dpdk/spdk_pid72183 00:27:13.352 Removing: /var/run/dpdk/spdk_pid72234 00:27:13.352 Removing: /var/run/dpdk/spdk_pid72623 00:27:13.352 Removing: /var/run/dpdk/spdk_pid73170 00:27:13.352 Removing: /var/run/dpdk/spdk_pid73595 00:27:13.352 Removing: /var/run/dpdk/spdk_pid74574 00:27:13.352 Removing: 
/var/run/dpdk/spdk_pid75565 00:27:13.352 Removing: /var/run/dpdk/spdk_pid75687 00:27:13.352 Removing: /var/run/dpdk/spdk_pid75747 00:27:13.352 Removing: /var/run/dpdk/spdk_pid77238 00:27:13.352 Removing: /var/run/dpdk/spdk_pid77479 00:27:13.352 Removing: /var/run/dpdk/spdk_pid77930 00:27:13.352 Removing: /var/run/dpdk/spdk_pid78041 00:27:13.352 Removing: /var/run/dpdk/spdk_pid78194 00:27:13.352 Removing: /var/run/dpdk/spdk_pid78234 00:27:13.352 Removing: /var/run/dpdk/spdk_pid78284 00:27:13.352 Removing: /var/run/dpdk/spdk_pid78325 00:27:13.352 Removing: /var/run/dpdk/spdk_pid78494 00:27:13.352 Removing: /var/run/dpdk/spdk_pid78641 00:27:13.352 Removing: /var/run/dpdk/spdk_pid78906 00:27:13.352 Removing: /var/run/dpdk/spdk_pid79023 00:27:13.352 Removing: /var/run/dpdk/spdk_pid79443 00:27:13.352 Removing: /var/run/dpdk/spdk_pid79833 00:27:13.352 Removing: /var/run/dpdk/spdk_pid79835 00:27:13.352 Removing: /var/run/dpdk/spdk_pid82097 00:27:13.352 Removing: /var/run/dpdk/spdk_pid82406 00:27:13.352 Removing: /var/run/dpdk/spdk_pid82925 00:27:13.352 Removing: /var/run/dpdk/spdk_pid82927 00:27:13.352 Removing: /var/run/dpdk/spdk_pid83276 00:27:13.352 Removing: /var/run/dpdk/spdk_pid83290 00:27:13.352 Removing: /var/run/dpdk/spdk_pid83310 00:27:13.352 Removing: /var/run/dpdk/spdk_pid83335 00:27:13.352 Removing: /var/run/dpdk/spdk_pid83350 00:27:13.352 Removing: /var/run/dpdk/spdk_pid83491 00:27:13.352 Removing: /var/run/dpdk/spdk_pid83493 00:27:13.352 Removing: /var/run/dpdk/spdk_pid83601 00:27:13.611 Removing: /var/run/dpdk/spdk_pid83609 00:27:13.611 Removing: /var/run/dpdk/spdk_pid83717 00:27:13.611 Removing: /var/run/dpdk/spdk_pid83719 00:27:13.611 Removing: /var/run/dpdk/spdk_pid84193 00:27:13.611 Removing: /var/run/dpdk/spdk_pid84244 00:27:13.611 Removing: /var/run/dpdk/spdk_pid84395 00:27:13.611 Removing: /var/run/dpdk/spdk_pid84516 00:27:13.611 Removing: /var/run/dpdk/spdk_pid84913 00:27:13.611 Removing: /var/run/dpdk/spdk_pid85160 00:27:13.611 Removing: /var/run/dpdk/spdk_pid85659 00:27:13.611 Removing: /var/run/dpdk/spdk_pid86224 00:27:13.611 Removing: /var/run/dpdk/spdk_pid86701 00:27:13.611 Removing: /var/run/dpdk/spdk_pid86790 00:27:13.611 Removing: /var/run/dpdk/spdk_pid86876 00:27:13.611 Removing: /var/run/dpdk/spdk_pid86967 00:27:13.611 Removing: /var/run/dpdk/spdk_pid87130 00:27:13.611 Removing: /var/run/dpdk/spdk_pid87217 00:27:13.611 Removing: /var/run/dpdk/spdk_pid87313 00:27:13.611 Removing: /var/run/dpdk/spdk_pid87402 00:27:13.611 Removing: /var/run/dpdk/spdk_pid87755 00:27:13.611 Removing: /var/run/dpdk/spdk_pid88461 00:27:13.611 Removing: /var/run/dpdk/spdk_pid89827 00:27:13.611 Removing: /var/run/dpdk/spdk_pid90032 00:27:13.611 Removing: /var/run/dpdk/spdk_pid90324 00:27:13.611 Removing: /var/run/dpdk/spdk_pid90632 00:27:13.611 Removing: /var/run/dpdk/spdk_pid91185 00:27:13.611 Removing: /var/run/dpdk/spdk_pid91201 00:27:13.611 Removing: /var/run/dpdk/spdk_pid91575 00:27:13.611 Removing: /var/run/dpdk/spdk_pid91731 00:27:13.611 Removing: /var/run/dpdk/spdk_pid91888 00:27:13.611 Removing: /var/run/dpdk/spdk_pid91985 00:27:13.611 Removing: /var/run/dpdk/spdk_pid92148 00:27:13.611 Removing: /var/run/dpdk/spdk_pid92258 00:27:13.611 Removing: /var/run/dpdk/spdk_pid92942 00:27:13.611 Removing: /var/run/dpdk/spdk_pid92976 00:27:13.611 Removing: /var/run/dpdk/spdk_pid93013 00:27:13.611 Removing: /var/run/dpdk/spdk_pid93256 00:27:13.611 Removing: /var/run/dpdk/spdk_pid93296 00:27:13.611 Removing: /var/run/dpdk/spdk_pid93327 00:27:13.611 Clean 00:27:13.611 killing process with pid 
00:27:13.611 killing process with pid 49744
00:27:13.611 killing process with pid 49747
00:27:13.611 22:27:10 -- common/autotest_common.sh@1446 -- # return 0
00:27:13.611 22:27:10 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:27:13.611 22:27:10 -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:13.611 22:27:10 -- common/autotest_common.sh@10 -- # set +x
00:27:13.870 22:27:10 -- spdk/autotest.sh@376 -- # timing_exit autotest
00:27:13.870 22:27:10 -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:13.870 22:27:10 -- common/autotest_common.sh@10 -- # set +x
00:27:13.870 22:27:10 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:13.870 22:27:10 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:27:13.870 22:27:10 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:27:13.870 22:27:10 -- spdk/autotest.sh@381 -- # [[ y == y ]]
00:27:13.870 22:27:10 -- spdk/autotest.sh@383 -- # hostname
00:27:13.870 22:27:10 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:27:14.128 geninfo: WARNING: invalid characters removed from testname!
00:27:36.057 22:27:31 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:38.590 22:27:34 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:41.121 22:27:37 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:43.024 22:27:39 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:45.597 22:27:41 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:47.500 22:27:44 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:50.032 22:27:46 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:27:50.032 22:27:46 -- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:27:50.032 22:27:46 -- common/autotest_common.sh@1690 -- $ lcov --version
00:27:50.032 22:27:46 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:27:50.032 22:27:46 -- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:27:50.032 22:27:46 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:27:50.032 22:27:46 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:27:50.032 22:27:46 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:27:50.032 22:27:46 -- scripts/common.sh@335 -- $ IFS=.-:
00:27:50.032 22:27:46 -- scripts/common.sh@335 -- $ read -ra ver1
00:27:50.032 22:27:46 -- scripts/common.sh@336 -- $ IFS=.-:
00:27:50.032 22:27:46 -- scripts/common.sh@336 -- $ read -ra ver2
00:27:50.032 22:27:46 -- scripts/common.sh@337 -- $ local 'op=<'
00:27:50.032 22:27:46 -- scripts/common.sh@339 -- $ ver1_l=2
00:27:50.032 22:27:46 -- scripts/common.sh@340 -- $ ver2_l=1
00:27:50.032 22:27:46 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:27:50.032 22:27:46 -- scripts/common.sh@343 -- $ case "$op" in
00:27:50.032 22:27:46 -- scripts/common.sh@344 -- $ : 1
00:27:50.032 22:27:46 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:27:50.032 22:27:46 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:50.032 22:27:46 -- scripts/common.sh@364 -- $ decimal 1
00:27:50.032 22:27:46 -- scripts/common.sh@352 -- $ local d=1
00:27:50.032 22:27:46 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:27:50.032 22:27:46 -- scripts/common.sh@354 -- $ echo 1
00:27:50.032 22:27:46 -- scripts/common.sh@364 -- $ ver1[v]=1
00:27:50.032 22:27:46 -- scripts/common.sh@365 -- $ decimal 2
00:27:50.032 22:27:46 -- scripts/common.sh@352 -- $ local d=2
00:27:50.032 22:27:46 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:27:50.032 22:27:46 -- scripts/common.sh@354 -- $ echo 2
00:27:50.032 22:27:46 -- scripts/common.sh@365 -- $ ver2[v]=2
00:27:50.032 22:27:46 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:27:50.032 22:27:46 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:27:50.032 22:27:46 -- scripts/common.sh@367 -- $ return 0
00:27:50.032 22:27:46 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:50.032 22:27:46 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:27:50.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:50.032 --rc genhtml_branch_coverage=1
00:27:50.032 --rc genhtml_function_coverage=1
00:27:50.032 --rc genhtml_legend=1
00:27:50.032 --rc geninfo_all_blocks=1
00:27:50.032 --rc geninfo_unexecuted_blocks=1
00:27:50.032
00:27:50.032 '
00:27:50.032 22:27:46 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:27:50.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:50.032 --rc genhtml_branch_coverage=1
00:27:50.032 --rc genhtml_function_coverage=1
00:27:50.032 --rc genhtml_legend=1
00:27:50.032 --rc geninfo_all_blocks=1
00:27:50.032 --rc geninfo_unexecuted_blocks=1
00:27:50.032
00:27:50.032 '
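The xtrace above is the coverage post-processing and lcov version probe: the per-run capture (cov_test.info) is merged with the baseline (cov_base.info) into cov_total.info, unwanted paths (the dpdk submodule, /usr/*, example and app sources) are stripped with lcov -r, and then the result of lcov --version | awk '{print $NF}' is compared field by field against 2; since 1.15 < 2, the branch/function --rc switches are kept in LCOV_OPTS and LCOV. A simplified, hypothetical stand-in for that version comparison (not the cmp_versions implementation from scripts/common.sh, which also handles other operators and separators):

# Hypothetical helper: succeed when dotted version $1 is numerically less than $2.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1   # versions are equal, so not less-than
}

lcov_ver=$(lcov --version | awk '{print $NF}')
if version_lt "$lcov_ver" 2; then
    # lcov 1.x still accepts the branch/function coverage --rc switches
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi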
00:27:50.032 22:27:46 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov
00:27:50.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:50.032 --rc genhtml_branch_coverage=1
00:27:50.032 --rc genhtml_function_coverage=1
00:27:50.032 --rc genhtml_legend=1
00:27:50.032 --rc geninfo_all_blocks=1
00:27:50.032 --rc geninfo_unexecuted_blocks=1
00:27:50.032
00:27:50.032 '
00:27:50.032 22:27:46 -- common/autotest_common.sh@1704 -- $ LCOV='lcov
00:27:50.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:50.032 --rc genhtml_branch_coverage=1
00:27:50.032 --rc genhtml_function_coverage=1
00:27:50.032 --rc genhtml_legend=1
00:27:50.032 --rc geninfo_all_blocks=1
00:27:50.032 --rc geninfo_unexecuted_blocks=1
00:27:50.032
00:27:50.032 '
00:27:50.032 22:27:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:50.032 22:27:46 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:27:50.032 22:27:46 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:50.032 22:27:46 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:50.033 22:27:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:50.033 22:27:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:50.033 22:27:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:50.033 22:27:46 -- paths/export.sh@5 -- $ export PATH
00:27:50.033 22:27:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:50.033 22:27:46 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:27:50.033 22:27:46 -- common/autobuild_common.sh@440 -- $ date +%s
00:27:50.033 22:27:46 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731882466.XXXXXX
00:27:50.033 22:27:46 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731882466.IiDa6U
00:27:50.033 22:27:46 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:27:50.033 22:27:46 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:27:50.033 22:27:46 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:27:50.033 22:27:46 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:27:50.033 22:27:46 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:27:50.033 22:27:46 -- common/autobuild_common.sh@456 -- $ get_config_params
00:27:50.033 22:27:46 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:27:50.033 22:27:46 -- common/autotest_common.sh@10 -- $ set +x
00:27:50.033 22:27:46 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang'
00:27:50.033 22:27:46 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:27:50.033 22:27:46 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:27:50.033 22:27:46 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:27:50.033 22:27:46 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:27:50.033 22:27:46 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:27:50.033 22:27:46 -- spdk/autopackage.sh@19 -- $ timing_finish
00:27:50.033 22:27:46 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:50.033 22:27:46 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:27:50.033 22:27:46 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:50.291 22:27:46 -- spdk/autopackage.sh@20 -- $ exit 0
00:27:50.291 + [[ -n 5221 ]]
00:27:50.291 + sudo kill 5221
00:27:50.300 [Pipeline] }
00:27:50.318 [Pipeline] // timeout
00:27:50.323 [Pipeline] }
00:27:50.338 [Pipeline] // stage
00:27:50.343 [Pipeline] }
00:27:50.358 [Pipeline] // catchError
00:27:50.367 [Pipeline] stage
00:27:50.369 [Pipeline] { (Stop VM)
00:27:50.381 [Pipeline] sh
00:27:50.662 + vagrant halt
00:27:53.224 ==> default: Halting domain...
00:27:59.811 [Pipeline] sh
00:28:00.089 + vagrant destroy -f
00:28:02.625 ==> default: Removing domain...
00:28:02.895 [Pipeline] sh
00:28:03.175 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:28:03.185 [Pipeline] }
00:28:03.200 [Pipeline] // stage
00:28:03.205 [Pipeline] }
00:28:03.219 [Pipeline] // dir
00:28:03.225 [Pipeline] }
00:28:03.239 [Pipeline] // wrap
00:28:03.245 [Pipeline] }
00:28:03.259 [Pipeline] // catchError
00:28:03.268 [Pipeline] stage
00:28:03.271 [Pipeline] { (Epilogue)
00:28:03.284 [Pipeline] sh
00:28:03.567 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:08.850 [Pipeline] catchError
00:28:08.852 [Pipeline] {
00:28:08.868 [Pipeline] sh
00:28:09.153 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:09.153 Artifacts sizes are good
00:28:09.162 [Pipeline] }
00:28:09.177 [Pipeline] // catchError
00:28:09.189 [Pipeline] archiveArtifacts
00:28:09.196 Archiving artifacts
00:28:09.322 [Pipeline] cleanWs
00:28:09.335 [WS-CLEANUP] Deleting project workspace...
00:28:09.335 [WS-CLEANUP] Deferred wipeout is used...
00:28:09.342 [WS-CLEANUP] done
00:28:09.344 [Pipeline] }
00:28:09.361 [Pipeline] // stage
00:28:09.367 [Pipeline] }
00:28:09.383 [Pipeline] // node
00:28:09.389 [Pipeline] End of Pipeline
00:28:09.428 Finished: SUCCESS